Set theory is of course completely saturated with this feature, since the independence phenomenon means that a huge proportion of the most interesting natural set-theoretic questions turn out to be independent of the basic ZFC axioms. Thus, most of the interesting work in set theory is about the relations between these various independent statements. They typically have the form of implications assuming the truth of a hypothesis not known to be true (and often, known in some sense not to be provably true), and therefore are instances of what you requested. The status of these various hypotheses as conjectures, however, to use your word, has given rise to vigorous philosophical debate in the foundations of mathematics and set theory, as to whether they have definite truth values and how we could come to know them.
Examples of such hypotheses that are used in this way would include all of the main set-theoretic hypotheses known to be independent. This list would run to several hundred natural statements, but let me list just a few:
- The Continuum Hypothesis (also the Generalized Continuum Hypothesis)
- The negation of the Continuum Hypothesis
- Martin's Axiom
- More generally, other forcing axioms, such as PFA or MM
- Relations between cardinal characteristics, such as $\mathfrak{b} < \mathfrak{d}$
- The entire large cardinal hierarchy
This last example is extremely important and a unifying instance of what you requested, for the large cardinal hierarchy is a tower of increasingly strong hypotheses, which we believe to be consistent, but haven't proved, and indeed, provably cannot prove, to be consistent, unless set theory itself is inconsistent. From any level of the large cardinal hierarchy, if consistent, we provably cannot prove the consistency of the higher levels.
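Schematically, writing $\mathrm{Con}(\cdot)$ for the arithmetized assertion that a theory is consistent, if $A$ is a large cardinal axiom and $A'$ is a stronger one, then typically

$$\mathrm{ZFC} + A' \vdash \mathrm{Con}(\mathrm{ZFC} + A), \qquad \text{but} \qquad \mathrm{ZFC} + A \nvdash \mathrm{Con}(\mathrm{ZFC} + A'),$$

since otherwise $\mathrm{ZFC} + A'$, which extends $\mathrm{ZFC} + A$, would prove its own consistency, contradicting Gödel's second incompleteness theorem (granting that it is consistent at all).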
So in this sense, the large cardinal hierarchy provides enormous iterated towers of your phenomenon.
This might seem at first to be a flaw. Why would we be interested in these large cardinals, if we cannot prove they exist, cannot prove that their existence is consistent, and indeed, can prove that we cannot prove they are consistent, assuming our basic axioms are consistent? The reason is that because of Gödel's incompleteness theorem, we know and expect to find such statements that are not settled, even when we assume Con(ZFC) and more. Thus, we know there is a hierarchy of consistency strength towering above us. The remarkable thing is that this tower turns out to be describable in terms of the very natural infinite combinatorics of large cardinals. These were notions, such as inaccessible, Ramsey and measurable cardinals, that arose from natural questions about infinite combinatorics, independently of any considerations of consistency strength.
Some of the most interesting uses of large cardinals have been equiconsistencies between large cardinals and other natural mathematical statements. For example, the impossibility of removing AC from the Vitali construction of a non-measurable set is exactly equiconsistent with the existence of an inaccessible cardinal. And the complete determinacy of infinite integer games (in ZF without AC) is equiconsistent with the existence of infinitely many Woodin cardinals.
Best Answer
Because there are natural computational problems involving many mathematical objects, there are a bunch of implications of complexity class separations like $\mathrm{P} \neq \mathrm{NP}$. I think the first paper to investigate this idea is probably Mike Freedman's *Complexity classes as mathematical axioms*, which assumes a complexity class separation (namely $\mathrm{NP} \neq \mathrm{P}^{\#\mathrm{P}}$, which is stronger than $\mathrm{P} \neq \mathrm{NP}$) to prove that knots with certain properties exist.
The main idea of all these arguments is to prove an implication like "If all objects of type $T$ satisfy property $P$, then there is an efficient algorithm for a problem which we assumed has no efficient algorithm." You can then deduce the existence of objects of type $T$ which satisfy property $\neg P$. (Here the meaning of "efficient" depends on the class separation you assume.)
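In symbols, the template is a simple contrapositive: if $\forall x\,(T(x) \to P(x))$ would yield an efficient algorithm for the hard problem, and the assumed separation says no such algorithm exists, then we may conclude $\exists x\,(T(x) \wedge \neg P(x))$.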
The exact thing Freedman proves is a little esoteric, so let me give two other examples that have a somewhat similar flavor.
The systole of a metric manifold is the length of the shortest non-contractible loop on it (I'll also use the word for the shortest loop itself). Probably the first manifold whose systole you'd try to understand is a flat torus $\mathbb{T}^d = \mathbb{R}^d / \Lambda$ for some lattice $\Lambda = \langle v_1, \dots, v_d \rangle$, since these are pretty much the simplest metric manifolds.
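Concretely, the free homotopy classes of loops on $\mathbb{T}^d$ correspond to the vectors $v \in \Lambda$, and each nonzero class is represented by a straight geodesic of length $\|v\|$, so

$$\operatorname{sys}(\mathbb{R}^d / \Lambda) \;=\; \min_{v \in \Lambda \setminus \{0\}} \|v\|,$$

which is to say that computing the systole of a flat torus is exactly the shortest vector problem for the lattice $\Lambda$.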
One natural thing might be to try to say something about the word length of the systole $\gamma$ when considered as an element of $\pi_1(\mathbb{T}^d)$ equipped with the generating set $\{ v_1^*, \dots, v_d^* \}$ where $v_i^*$ is the loop in $\mathbb{T}^d$ naturally associated to $v_i$. For example, maybe the systole always has a relatively short word expressing it. Say, maybe we can always write $\gamma =_{\pi_1(\mathbb{T}^d)} \sum_i n_i v_i^*$ with $\sum_i |n_i| < \sqrt{d}$ or something.
It turns out that a modest strengthening of $\mathrm{P} \neq \mathrm{NP}$ actually rules this out. Specifically, assuming that $\mathrm{NP}$-hard problems do not have time-$2^{o(n)}$ bounded-error probabilistic algorithms, we can prove the following: for any $\ell(d) = o(d / \log d)$, there exist infinitely many $d$ and $\Lambda = (v_1, \dots, v_d)$ such that every systolic loop $\gamma$ on the torus $\mathbb{R}^d / \Lambda$ has word length at least $\ell(d)$ in the generating set $\{ v_1^*, \dots, v_d^* \}$.
The idea of the argument is just that such a bound would imply a sub-exponential time algorithm for the $\mathrm{NP}$-hard problem of computing the systole: simply enumerate all words in the generators of at most the given length, and pick the one represented by the shortest loop (the length of the loop attached to a word is easy to compute).
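Here is a minimal sketch of that enumeration in Python, assuming a concrete lattice basis as input; the function name, toy lattice, and brute-force word search are my own illustration rather than anything from the literature:

```python
import itertools
import numpy as np

def systole_via_short_words(basis, max_word_length):
    """Search for the systole of R^d / Lambda among short words in pi_1.

    basis: (d, d) array whose rows v_1, ..., v_d generate the lattice Lambda.
    A word of length k is a product of k generators v_i^{+-1}; since
    pi_1(T^d) = Lambda is abelian, it is just a signed sum of rows, and the
    geodesic loop in that class has Euclidean length ||sum|| (easy to compute).
    Returns the shortest nonzero loop reachable by a word of length
    <= max_word_length, together with its length (or (None, inf) if none).
    """
    basis = np.asarray(basis, dtype=float)
    d = basis.shape[0]
    generators = [sign * basis[i] for i in range(d) for sign in (+1, -1)]
    best_vec, best_len = None, np.inf
    for k in range(1, max_word_length + 1):
        # There are (2d)^k words of length k, so searching all words of
        # length o(d / log d) takes time 2^{o(d)} -- subexponential in d.
        for word in itertools.product(generators, repeat=k):
            vec = np.sum(word, axis=0)
            length = float(np.linalg.norm(vec))
            if 0 < length < best_len:  # skip words cancelling to the trivial loop
                best_vec, best_len = vec, length
    return best_vec, best_len

# Toy example: if systoles always had short words, this search would solve
# the NP-hard shortest vector problem in subexponential time.
vec, length = systole_via_short_words(np.array([[2.0, 0.0], [1.0, 3.0]]), 3)
print(vec, length)
```

The inner comment carries the whole argument: words of length $o(d / \log d)$ in $2d$ generators number only $2^{o(d)}$, so if such words always sufficed to express the systole, the search above would beat the assumed hardness.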
You can use a similar argument to show that faithful representations of $S_n$ have dimension at least $n^{\varepsilon}$ for some $\varepsilon > 0$ (assuming some version of the strong exponential time hypothesis). The basic idea is given here -- the argument is very simple.
These arguments, I think, give some idea of why proving class separations should be hard. They immediately imply that mathematical objects of all kinds must have certain kinds of complexity, or else those objects could be used to give algorithms contradicting the separations. So a single class separation demonstrates the existence of such complexity in all of these mathematical objects at once.
Bonus: Tim Roughgarden and Inbal Talgam-Cohen have some work along these lines as well, showing that class separations imply the existence of markets in which certain kinds of equilibria do not exist.