There are no criteria that need to be satisfied in order to use iterated forcing. Not only that, but since iterating forcing is the same as taking a single forcing extension (using the iteration poset), the question sort of falls flat on its face.
Even worse, with the exception of a certain class of "minimal" generic extensions, most (in some sense) forcing notions are in fact iterations, since they can be decomposed into an iteration of one or more subforcings. For example, adding a Cohen real can be thought of as adding two Cohen reals, one after the other. And collapsing $\omega_1$ can be thought of as first adding a Cohen real, then adding a branch to the Suslin tree added by that Cohen real, and then collapsing $\omega_1$.
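To make the Cohen example concrete (writing $\operatorname{Add}(\omega,1)$ for Cohen forcing, as usual), one has
$$\operatorname{Add}(\omega,1)\cong\operatorname{Add}(\omega,1)\times\operatorname{Add}(\omega,1),$$
since a finite partial function on $\omega$ splits into its even- and odd-coordinate parts. So a single Cohen real $c$ decomposes into two mutually generic Cohen reals (say, the even- and odd-indexed bits of $c$), and the product is in turn a two-step iteration.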
So why do we even use iterated forcing?
Because it's convenient. Because it is easier to break a large problem down into smaller problems and then deal with them one at a time. When forcing Martin's Axiom, for example, it is easier to deal with the forcing notions one step at a time, rather than trying to somehow capture all of the existing ones, and the ones yet to come, simultaneously.
Stranger still, the iterative approach to Martin's Axiom is pure magic: every limit step adds Cohen reals; every Cohen real adds a Suslin tree; and yet Martin's Axiom implies that there are no Suslin trees.
How does this happen? Because of the very nature of the iteration: at each step we anticipate "a problem", and we solve it.
Other times we might want to construct an object via forcing, but the construction requires our starting model to contain certain objects which are not guaranteed to exist. Or perhaps the construction requires a certain degree of genericity over the model, so first adding something new to work with is a good thing. In these approaches we start with $V$, extend it once with a preparation (which itself may or may not be an iteration, e.g. for Martin's Axiom or the indestructibility of large cardinals), and then perform one or two more extensions to obtain the final model.
Yes, we can describe the whole thing as a single forcing poset. But why? It offers no better result, and only makes it harder to describe your objects or argue why they have this or that property.
For exactly the same reason it is sometimes convenient to think about a Cohen real as a subset of $\omega$, sometimes as a binary sequence in $2^\omega$, and sometimes as a general sequence in $\omega^\omega$. And sometimes it's easier to think about a single Cohen real as infinitely many different Cohen reals instead.
In general terms, the direct limit should contain only as much information as is contained in the previous iterations. So in this sense, if we have something in the direct limit, it should really just be something in one of the previous iterations but translated to the full iteration just by adding a bunch of trivial entries at the end.
The inverse limit, on the other hand, is the largest thing that still counts as an iteration: it has "projections" down to the previous iterations, in the sense that conditions in the inverse limit are still built from previous iterations, but there is essentially no restriction on what these constructions can be: as long as the initial segments are all conditions in previous iterations, the full-length sequence works. In a more practical sense, the "projections" used are just the restriction maps $p\mapsto p\upharpoonright \alpha$ for each $\alpha$; the inverse limit is the largest poset for which we can still restrict conditions down. In the same phrasing, the direct limit is the smallest thing that still counts as an iteration.
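Concretely (writing $\langle P_\alpha:\alpha<\lambda\rangle$ for the sequence of previous iterations and $\mathbb{1}$ for the trivial condition), a length-$\lambda$ sequence $p$ lies in the inverse limit iff all of its restrictions are conditions:
$$p\in\varprojlim_{\alpha<\lambda}P_\alpha\iff\forall\alpha<\lambda\ \ p\upharpoonright\alpha\in P_\alpha,$$
while $p$ lies in the direct limit iff, in addition, $p$ is trivial from some point on:
$$p\in\varinjlim_{\alpha<\lambda}P_\alpha\iff\exists\alpha<\lambda\ \big(p\upharpoonright\alpha\in P_\alpha\wedge\forall\beta\in[\alpha,\lambda)\ \ p(\beta)=\mathbb{1}\big).$$
This is just the bounded-support vs. full-support mantra written out in terms of conditions.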
In another way, if we think of conditions built up through the iterations as a kind of tree ($p\upharpoonright\alpha\lhd p\upharpoonright\beta$ for $\alpha<\beta$) then the inverse limit corresponds to taking all the cofinal branches of this tree whereas the direct limit corresponds to taking all the bounded branches of this tree (or rather, those with bounded support).
It's much easier to understand these conditions in terms of support, however, and so Jech's definition, while not particularly motivated, is probably the most practical understanding to have. The mantra really is just that direct limits have bounded support whereas inverse limits use full support, in the sense that a $\kappa$-length iteration with support in an ideal $I\subseteq\mathcal{P}(\kappa)$ is the inverse limit of previous iterations iff the support obeys
$$\{x\in I:\sup(x)=\kappa\}=\{x\subseteq\kappa:\sup(x)=\kappa\wedge\forall\beta<\kappa\ (x\cap\beta\in I)\}\text{.}$$
And similarly, such an iteration is the direct limit of previous iterations iff the support obeys
$$ I\subseteq\{x\subseteq\kappa:\sup(x)<\kappa\}\text{,}$$
equivalently, $I=\bigcup_{\beta<\kappa}I\cap \mathcal{P}(\beta)$.
In practice, the two are often very useful because it's relatively easy to confirm that a particular construction is indeed a condition in the iteration. With inverse limits there's really no question: if it's a limit of conditions in previous iterations, it's a condition in the limit iteration, regardless of what the support ends up looking like. With direct limits, you just need to show that the support is bounded, which is often pretty simple. So you will often see results in the literature that use either direct or inverse limits at every limit stage, like Easton iterations.
Let me try to give an example that might help clarify things. Consider the following scenario: we start with a model of $\sf GCH$, say $L$ for good measure, and now we want to violate $\sf GCH$ on some of the $\aleph_n$s. This essentially defines a real number, but you want to be clever and not use any of the real numbers already in the ground model.
So, to start, you add a Cohen real $c\subseteq\omega$, and then you force with $\prod_{n\in c}\operatorname{Add}(\omega_n,\omega_{n+2})$ to violate $\sf GCH$ exactly at those $\aleph_n$s whose index lies in the Cohen real.
How would you describe this whole construction from $L$, then? There is no "obvious" partial order. But the Cohen forcing has an obvious partial order, and once the Cohen real has been added, the next step also has an obvious partial order.
Now, that second step has a name, since it lives in $L[c]$, and so we can use that name and combine it with the Cohen forcing to create the iteration, which performs the whole forcing in one go.
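For the record, this is the standard two-step iteration: writing $\dot{\mathbb{Q}}$ for a name for the second step, the iteration $\operatorname{Add}(\omega,1)*\dot{\mathbb{Q}}$ consists of pairs
$$(p,\dot q)\quad\text{with }p\in\operatorname{Add}(\omega,1)\text{ and }p\Vdash\dot q\in\dot{\mathbb{Q}},$$
ordered by
$$(p_1,\dot q_1)\le(p_0,\dot q_0)\iff p_1\le p_0\ \wedge\ p_1\Vdash\dot q_1\le\dot q_0.$$
So stronger conditions in the first coordinate decide more of the Cohen real, and thereby more of what the second-step poset looks like.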
So, how would that work? Well, as we "progress" further along the Cohen real, we learn more. For example, will $59$ be in the Cohen real? We don't know at first, but eventually we do. Once a condition decides that, we know that our next forcing will have a factor of the form $\operatorname{Add}(\omega_{59},\omega_{61})$. This "reveals" the second step one bit at a time.