"Multiple comparisons" is the name attached to the general problem of making decisions based on the results of more than one test. The nature of the problem is made clear by the famous XKCD "Green jelly bean" cartoon in which investigators performed hypothesis tests of associations between consumption of jelly beans (of 20 different colors) and acne. One test reported a p-value less than $1/20$, leading to the conclusion that "green jelly beans cause acne." The joke is that p-values, by design, have a $1/20$ chance of being less than $1/20$, so intuitively we would expect to see a p-value that low among $20$ different tests.
What the cartoon does not say is whether the $20$ tests were based on separate datasets or one dataset.
With separate datasets, and assuming every null hypothesis is true, each of the $20$ results has a $1/20$ chance of being "significant." Basic properties of probabilities of independent events then imply that the chance all $20$ results are "insignificant" is $(1-0.05)^{20}\approx 0.36$. The complementary chance of $1-0.36 = 0.64$ is large enough to corroborate our intuition that a single "significant" result in this large group of results is no surprise; no cause can validly be assigned to such a result except the operation of chance.
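A quick way to check this arithmetic is to simulate it. Here is a minimal sketch (the replication count is an arbitrary choice) that draws $20$ independent null p-values many times and counts how often at least one falls below $0.05$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tests, alpha, n_reps = 20, 0.05, 100_000

# Analytic answer: the chance of at least one "significant" result
# among 20 independent tests of true null hypotheses.
analytic = 1 - (1 - alpha) ** n_tests          # 1 - 0.95^20 ~ 0.64

# Simulation: under a true null, each p-value is Uniform(0, 1).
p = rng.uniform(size=(n_reps, n_tests))
simulated = (p < alpha).any(axis=1).mean()

print(f"analytic  = {analytic:.3f}")
print(f"simulated = {simulated:.3f}")
```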
If the $20$ results were based on a common dataset, however, the preceding calculation would be erroneous: it assumes all $20$ outcomes were statistically independent. But why wouldn't they be? Analysis of Variance provides a standard example: when comparing two or more treatment groups against a control group, each comparison involves the same control results. The comparisons are not independent. Now, for instance, "significant" differences could arise due to chance variation in the controls. Such variation could simultaneously change the comparisons with every group.
(ANOVA handles this problem by means of its overall F-test. It is a sort of comparison "to rule them all": we do not trust any group-to-group comparison unless this F-test is significant in the first place.)
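To see the dependence concretely, here is a hypothetical simulation in which two treatment groups are each compared against the same control group; the correlation between the two t-statistics arises only because they share the control data (the group sizes and replication count are illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_reps = 30, 5_000
t_a, t_b = np.empty(n_reps), np.empty(n_reps)

# All three groups come from the same null distribution, but both
# comparisons reuse the same control sample in each replication.
for i in range(n_reps):
    control = rng.normal(size=n)
    group_a = rng.normal(size=n)
    group_b = rng.normal(size=n)
    t_a[i] = stats.ttest_ind(group_a, control).statistic
    t_b[i] = stats.ttest_ind(group_b, control).statistic

# Independent comparisons would give a correlation near 0; sharing the
# control pushes it well above that.
print(f"correlation of the two t-statistics: {np.corrcoef(t_a, t_b)[0, 1]:.2f}")
```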
We can abstract the essence of this situation with the following framework. Multiple comparisons concerns making a decision from the p-values $(p_1, p_2, \ldots, p_n)$ of $n$ distinct tests. Those p-values are random variables. When its null hypothesis is true, each $p_i$ should have a uniform distribution; and provided the null hypotheses are logically consistent, it makes sense to speak of the joint null distribution of $(p_1, \ldots, p_n)$. When we know that joint distribution, we can construct reasonable ways to combine all $n$ of the p-values into a single decision. Otherwise, the best we can usually do is rely on approximate bounds (which is the basis of the Bonferroni correction, for instance).
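As an illustration of the bound-based approach, the Bonferroni correction rejects the $i$-th null hypothesis only when $p_i \le \alpha/n$; by Boole's inequality this controls the family-wise error rate under arbitrary dependence. A minimal sketch with made-up p-values:

```python
import numpy as np

alpha = 0.05
p_values = np.array([0.001, 0.020, 0.049, 0.300, 0.800])  # hypothetical p-values
n = len(p_values)

# Bonferroni: reject H0_i only when p_i <= alpha / n.
reject = p_values <= alpha / n
for p, r in zip(p_values, reject):
    print(f"p = {p:.3f} -> reject: {bool(r)}")
```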
Joint distributions of independent random variables are easy to compute. The literature therefore distinguishes between this situation and the case of non-independence.
Accordingly, the correct meaning of "independent" in the quotations is in the usual statistical sense of independent random variables.
Note that an assumption was needed to arrive at this conclusion: namely, that all $n$ of the null hypotheses are logically consistent. As an example of what is being avoided, consider conducting two tests with a batch of univariate data $(x_1, \ldots, x_m)$ assumed to be a random sample from a Normal distribution of unknown mean $\mu$. The first is a t-test of $\mu=0$, with p-value $p_1$, and the second is a t-test of $\mu=1$, with p-value $p_2$. Since both cannot logically hold simultaneously, it would be problematic to talk about "the null distribution" of $(p_1, p_2)$. In this case there can be no such thing at all! Thus the very concept of statistical independence sometimes cannot even apply.
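To make this concrete, here is a hypothetical rendering of the two contradictory t-tests; the data are simulated only so that there is something to test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=0.5, scale=1.0, size=25)   # one batch of univariate data

# Two one-sample t-tests on the *same* data, with nulls mu = 0 and mu = 1.
p1 = stats.ttest_1samp(x, popmean=0.0).pvalue
p2 = stats.ttest_1samp(x, popmean=1.0).pvalue

# The two nulls cannot both be true, so there is no single "null
# distribution" under which (p1, p2) could be jointly uniform.
print(f"p-value for H0: mu = 0 -> {p1:.3f}")
print(f"p-value for H0: mu = 1 -> {p2:.3f}")
```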
By coincidence, I happened to read this same paper just a couple of weeks ago. Colquhoun mentions multiple comparisons (including Benjamini-Hochberg) in Section 4 when posing the problem, but I find that he does not make the issue clear enough -- so I am not surprised to see your confusion.
The important point to realize is that Colquhoun is talking about the situation without any multiple comparison adjustments. One can understand Colquhoun's paper as adopting a reader's perspective: he essentially asks what false discovery rate (FDR) he can expect when reading the scientific literature, i.e. the expected FDR when no multiple comparison adjustments were done.
Multiple comparisons can be taken into account when running multiple statistical tests in one study, e.g. in one paper. But nobody ever adjusts for multiple comparisons across papers.
If you actually control the FDR, e.g. by following the Benjamini-Hochberg (BH) procedure, then it will be controlled. The problem is that running the BH procedure separately in each study does not guarantee overall FDR control.
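For reference, here is what "following the BH procedure" within a single study might look like; a minimal sketch using statsmodels, with invented p-values. The adjustment applies only to this family of tests, not to tests from other, separately analysed studies:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from the tests reported in a single study.
p_values = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])

# Benjamini-Hochberg controls the FDR at 5% within this family of tests;
# it says nothing about the FDR across separately analysed studies.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for p, q, r in zip(p_values, p_adjusted, reject):
    print(f"p = {p:.3f}  BH-adjusted = {q:.3f}  reject = {bool(r)}")
```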
Can I safely assume that in the long run, if I do such analysis on a regular basis, the FDR is not $30\%$, but below $5\%$, because I used Benjamini-Hochberg?
No. If you use the BH procedure in every paper, but independently in each of your papers, then you can essentially interpret your BH-adjusted $p$-values as ordinary $p$-values, and what Colquhoun says still applies.
General remarks
The answer to Colquhoun's question about the expected FDR is difficult to give because it depends on various assumptions. If, e.g., all the null hypotheses are true, then the FDR will be $100\%$ (i.e. all "significant" findings would be statistical flukes). And if all nulls are in reality false, then the FDR will be zero. So the FDR depends on the proportion of true nulls, and this is something that has to be externally estimated or guessed in order to estimate the FDR. Colquhoun gives some arguments in favor of the $30\%$ number, but this estimate is highly sensitive to the assumptions.
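To see how sensitive the estimate is, here is a back-of-the-envelope calculation in the spirit of Colquhoun's argument; the assumed prevalence of real effects and the assumed power are inputs one has to guess, not facts:

```python
def expected_fdr(prior_real: float, power: float, alpha: float = 0.05) -> float:
    """Long-run FDR among 'significant' results, given the fraction of
    tested hypotheses that reflect a real effect and the test's power."""
    false_positives = (1 - prior_real) * alpha   # true nulls crossing alpha
    true_positives = prior_real * power          # real effects detected
    return false_positives / (false_positives + true_positives)

# Assumed inputs in the spirit of Colquhoun's example: 10% of tested
# hypotheses are real effects, 80% power, alpha = 0.05.
print(f"FDR ~ {expected_fdr(prior_real=0.10, power=0.80):.0%}")   # about 36%

# The answer moves a lot when the assumptions move:
print(f"FDR ~ {expected_fdr(prior_real=0.50, power=0.80):.0%}")   # about 6%
print(f"FDR ~ {expected_fdr(prior_real=0.01, power=0.80):.0%}")   # about 86%
```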
I think the paper is mostly reasonable, but I dislike that it makes some claims sound way too bold. E.g. the first sentence of the abstract is:
If you use $p=0.05$ to suggest that you have made a discovery, you will be wrong at least $30\%$ of the time.
This is formulated too strongly and can actually be misleading.
Best Answer
Benjamini and Hochberg (1995) introduced the false discovery rate, and Benjamini and Yekutieli (2001) proved that their procedure remains valid under some forms of dependence. Dependence can arise as follows: consider the continuous variable used in a t-test and another variable correlated with it -- for example, testing whether BMI differs between two groups and whether waist circumference differs between the same two groups. Because these variables are correlated, the resulting p-values will also be correlated.

Yekutieli and Benjamini (1999) developed another FDR-controlling procedure, which can be used under general dependence by resampling the null distribution. Because the comparison is with respect to the null permutation distribution, the method becomes more conservative as the total number of true positives increases. It turns out that BH 1995 also becomes conservative as the number of true positives increases. To improve on this, Benjamini and Hochberg (2000) introduced the adaptive FDR procedure, which requires estimating a parameter, the null proportion, that is also used in Storey's pFDR estimator. Storey gives comparisons, argues that his method is more powerful, and emphasizes the conservative nature of the 1995 procedure. Storey also has results and simulations under dependence.
All of the above procedures are valid under independence. The question is what kind of departure from independence each of them can handle.
My current thinking is that if you don't expect too many true positives, the Yekutieli and Benjamini (1999) procedure is nice because it incorporates distributional features and dependence; however, I'm unaware of an implementation. Storey's method was designed for many true positives with some dependence. BH 1995 offers an alternative to the family-wise error rate, and it is still conservative.
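For completeness: the BH (1995) procedure and the more conservative Benjamini-Yekutieli (2001) correction for arbitrary dependence (not the 1999 resampling procedure mentioned above) are both available in statsmodels. A minimal sketch comparing them on the same invented p-values:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

p_values = np.array([0.0002, 0.004, 0.019, 0.026, 0.041, 0.130, 0.510, 0.920])

# BH (1995): assumes independent (or positively dependent) test statistics.
_, p_bh, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

# BY (2001): valid under arbitrary dependence, at the price of being
# more conservative (larger adjusted p-values, fewer rejections).
_, p_by, _, _ = multipletests(p_values, alpha=0.05, method="fdr_by")

print("p        BH-adj   BY-adj")
for p, a, b in zip(p_values, p_bh, p_by):
    print(f"{p:.4f}   {a:.4f}   {b:.4f}")
```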
Benjamini, Y. and Hochberg, Y. (2000). On the Adaptive Control of the False Discovery Rate in Multiple Testing with Independent Statistics. Journal of Educational and Behavioral Statistics.