From your question and in particular your comments to other answers, it seems to me that you are mainly confused about the "big picture" here: namely, what does "positive dependency" refer to in this context at all -- as opposed to what is the technical meaning of the PRDS condition. So I will talk about the big picture.
The big picture
Imagine that you are testing $N$ null hypotheses, and imagine that all of them are true. Each of the $N$ $p$-values is a random variable; repeating the experiment over and over again would yield a different $p$-value each time, so one can talk about the distribution of $p$-values (under the null). It is well known that, for any valid test (with a continuous test statistic), the distribution of $p$-values under the null is uniform; so, in the case of multiple testing, all $N$ marginal distributions of $p$-values will be uniform.
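This uniformity is easy to check by simulation. The sketch below (assuming numpy and scipy are available) runs many replicate two-sample $t$-tests with both groups drawn from the same distribution, so every null is true:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 10,000 replicate experiments: two groups of n = 20, both drawn from
# N(0, 1), so the null hypothesis is true in every replicate
a = rng.normal(size=(10_000, 20))
b = rng.normal(size=(10_000, 20))
pvals = stats.ttest_ind(a, b, axis=1).pvalue

# Under the null the p-value distribution is uniform on [0, 1]:
print(pvals.mean())           # close to 0.5
print((pvals < 0.05).mean())  # close to 0.05
```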
If all the data and all $N$ tests are independent of each other, then the joint $N$-dimensional distribution of $p$-values will also be uniform. This is true, e.g., in the classic "jelly-bean" situation, when a bunch of independent things are being tested.
However, it does not have to be like that. Any pair of $p$-values can in principle be correlated, either positively or negatively, or be dependent in some more complicated way. Consider testing all pairwise differences in means between four groups; that is $N=4\cdot 3/2=6$ tests. Each of the six $p$-values alone is uniformly distributed. But they are all positively correlated: if (on a given attempt) group A happens to have a particularly low mean, then the A-vs-B comparison might yield a low $p$-value (a false positive); but in that situation it is likely that A-vs-C, as well as A-vs-D, will also yield low $p$-values. So the $p$-values are obviously non-independent; moreover, they are positively correlated with each other.
This is, informally, what "positive dependency" refers to.
This seems to be a common situation in multiple testing. Another example would be testing for differences in several variables that are correlated with each other. Obtaining a significant difference in one of them increases the chances of obtaining a significant difference in another.
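The pairwise-comparisons example above can be checked by simulation. Three of the four groups suffice to make the point: the two comparisons share group A, so their $p$-values come out positively correlated (a sketch, assuming numpy and scipy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_rep, n = 5_000, 20

# All groups have the same true mean, so all pairwise nulls are true
A = rng.normal(size=(n_rep, n))
B = rng.normal(size=(n_rep, n))
C = rng.normal(size=(n_rep, n))

p_ab = stats.ttest_ind(A, B, axis=1).pvalue
p_ac = stats.ttest_ind(A, C, axis=1).pvalue

# Both comparisons involve group A, so the two p-values are positively
# correlated across replicates
r = np.corrcoef(p_ab, p_ac)[0, 1]
print(r > 0)  # True
```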
It is tricky to come up with a natural example where $p$-values would be "negatively dependent". @user43849 remarked in the comments above that for one-sided tests it is easy:
> Imagine I am testing whether A = 0 and also whether B = 0 against one-tailed alternatives (A > 0 and B > 0). Further imagine that B depends on A. For example, imagine I want to know if a population contains more women than men, and also if the population contains more ovaries than testes. Clearly knowing the p-value of the first question changes our expectation of the p-value for the second. Both p-values change in the same direction, and this is PRD. But if I instead test the second hypothesis that population 2 has more testes than ovaries, our expectation for the second p-value decreases as the first p-value increases. This is not PRD.
But I have so far been unable to come up with a natural example with point nulls.
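The one-sided example can be simulated. The sketch below (my own construction, assuming numpy and scipy, and simplifying so that each woman contributes ovaries and each man testes, making the second test effectively the mirror of the first) shows the two exact binomial $p$-values are strongly negatively correlated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_rep = 50, 2_000

# Null: the population is 50/50; w = number of women in each sample
w = rng.binomial(n, 0.5, size=n_rep)

# Exact one-sided binomial p-values, using sf(k - 1) = P(X >= k) under H0:
#   "more women than men"       -> small when w is large
#   "more testes than ovaries"  -> in effect "more men than women",
#                                  small when w is small
p_women = stats.binom.sf(w - 1, n, 0.5)
p_testes = stats.binom.sf(n - w - 1, n, 0.5)

# The two p-values move in opposite directions as w varies
r = np.corrcoef(p_women, p_testes)[0, 1]
print(r < 0)  # True: negative dependence
```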
Now, the exact mathematical formulation of "positive dependency" that guarantees the validity of the Benjamini-Hochberg procedure is rather tricky. As mentioned in other answers, the main reference is Benjamini & Yekutieli 2001; they show that the PRDS property ("positive regression dependency on each one from a subset") is sufficient for the Benjamini-Hochberg procedure to be valid. PRDS is a relaxed form of the PRD ("positive regression dependency") property, meaning that PRD implies PRDS, and hence PRD also guarantees the validity of the procedure.
For the definitions of PRD/PRDS, see @user43849's answer (+1) and the Benjamini & Yekutieli paper. The definitions are rather technical, and I do not have a good intuitive understanding of them. In fact, B&Y mention several other related concepts as well: multivariate total positivity of order two (MTP2) and positive association (PA). According to B&Y, they are related as follows (the diagram is mine):
$\hskip{10em}$ *(diagram: MTP2 $\Rightarrow$ PRD $\Rightarrow$ PRDS $\Rightarrow$ B-H validity; PRD $\Rightarrow$ PA)*
MTP2 implies PRD, which implies PRDS, which in turn guarantees the correctness of the B-H procedure. PRD also implies PA, but PA does not imply PRDS.
Best Answer
Two points
Point the First
If you are using something like R's `p.adjust()` to calculate $q$-values, then values of 1 simply indicate "not rejected at any level of FDR." $q$-values are actually a little problematic to interpret directly, since they involve a subtle mathematical artifice and since they do not communicate the step-wise nature of the FDR adjustment process (and one cannot make FDR rejection decisions based on $q$-values alone). Backing up to a single two-sided hypothesis can help illustrate why: reject $H_{0}$ if $p \le \alpha/2$. So for $\alpha = 0.05$, we would reject $H_{0}$ if $p \le 0.025$. Alternatively, we could express this same rejection criterion as: reject $H_{0}$ if $2p \le \alpha$. The first expression perhaps emphasizes the meaning of $p$, and the second emphasizes the meaning of $\alpha$.
If we think about the Bonferroni method (FWER, not FDR), we can see that there are two ways to express the rejection criterion given $m$ comparisons:
Reject $H_{0}$ if $p \le \frac{\alpha/2}{m}$, or
Reject $H_{0}$ if $2mp \le \alpha$.
That $2mp$ is an 'adjusted $p$-value', sometimes called a '$q$-value'.
(I suppose there's also a third way: reject $H_{0}$ if $mp \le \alpha/2$.)
But look: $2mp > 1$ whenever $p > 0.5/m$, which is quite possible. Unfortunately, $p$ (or $q$) is supposed to be a probability, which means that its value is bounded between zero and one inclusive. So many folks, and many statistical software authors, will take an expression like $q = mp$ and replace it with $q=\min(1,mp)$. The same applies to FDR (whether using the Benjamini-Hochberg or the Benjamini-Yekutieli method): the adjustments are more complicated than the Bonferroni one, but they likewise cap the $q$-values at 1.
In a way, I suspect this implies that expressing such adjustments as adjustments of the rejection levels, rather than as adjustments to the $p$-values, is a little more coherent, because the artifice of $\min(1,f(p,i))$ does not apply.
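As a sketch of the capping, here are the two adjustments in Python rather than R (the helper names are mine; the results mirror what `p.adjust(..., method = "bonferroni")` and `method = "BH"` report):

```python
import numpy as np

def bonferroni_adjust(p):
    """Bonferroni adjusted p-values: q = m * p, capped at 1."""
    p = np.asarray(p, float)
    return np.minimum(1.0, len(p) * p)

def bh_adjust(p):
    """Benjamini-Hochberg (step-up) adjusted p-values, capped at 1."""
    p = np.asarray(p, float)
    m = len(p)
    order = np.argsort(p)
    # Scale the i-th smallest p-value by m / i ...
    scaled = p[order] * m / np.arange(1, m + 1)
    # ... enforce monotonicity from the largest p-value down, then cap at 1
    adj = np.minimum(1.0, np.minimum.accumulate(scaled[::-1])[::-1])
    out = np.empty(m)
    out[order] = adj
    return out

print(bonferroni_adjust([0.01, 0.6]))      # [0.02, 1.0]: the 0.6 case is capped
print(bh_adjust([0.01, 0.02, 0.03, 0.9]))  # approximately [0.04, 0.04, 0.04, 0.9]
```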
Point the Second
We can't tell for sure because you have not provided, for example, your vector of $p$-values, but the likelihood is that your $p$-values are all too high, and that you are not achieving significance.