What I mean by this is: how do we know it does not have unexpected consequences which may alter the rest of mathematics?
I will give a couple of remarks, and also link to these MathOverflow discussions:
(1) In set theory, the study of large cardinals (much "larger" than just inaccessible) has been very fruitful. The existence of many of these large cardinals requires the existence of inaccessibles. So set theorists are interested in these large cardinals because of their useful (perhaps "startling") consequences. If there were no interesting consequences, set theorists would find other things to look at.
(2) From the skeptical point of view, we don't know what the consequences might be. It could be that ZFC is consistent while ZFC plus the universe axiom is inconsistent. Many people develop an intuition that the existence of universes is consistent with ZFC, often from thinking about how the cumulative hierarchy works. On the other hand, there is a manuscript by Kiselev (link) in which he claims to prove that the existence of even one inaccessible cardinal is inconsistent with ZFC.
We do know that ZFC, if it is itself consistent, cannot prove that there is even one inaccessible cardinal. And we cannot prove in ZFC that the existence of an inaccessible is consistent, because of limitations coming from the incompleteness theorems. So any argument that inaccessibles are consistent will have to use methods that cannot be formalized in ZFC.
(3) Temporarily adopt a Platonistic perspective, at least for the sake of argument. From this position, each "axiom" is either true or false, but it cannot alter the properties of mathematical objects, which exist separately from the axioms used to study them. Of course we can prove false statements from false axioms, but we can't actually change the objects themselves.
(4) Now temporarily reject Platonism, and think only about formal proofs. Then it will not make any difference to my conception of mathematics if someone else adopts an axiom that I don't accept. I will simply put a * beside all the theorems that use this axiom, and count them as dubious at best. I might even reprove some of the theorems without the new axiom just so I know they are OK. In this way, my personal conception of mathematics would also be unchanged by other people using different axioms.
I think that (3) and (4) begin to indicate how philosophical issues enter when we ask about the effects of different axioms on "mathematics".
(This answer is marked as community wiki, as I already gave a different answer for this question. Please feel welcome to add more links to the list of links above.)
What you really want to prove here is the following:
If $\operatorname{cf}(\alpha)\neq\kappa=\operatorname{cf}(\kappa)$, then every subset of $\alpha$ of order type $\kappa$ is bounded.
This is trivial: if such a subset is not bounded, then its increasing enumeration is a cofinal map from $\kappa$ into $\alpha$, so the cofinality of $\alpha$ is $\kappa$.
Now if you have any subset of size $\kappa$, it has a subset of order type $\kappa$ as well (take the first $\kappa$ points of its increasing enumeration).
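Written out (filling in the standard fact that an unbounded subset of an ordinal has the same cofinality as its order type), the computation behind the lemma is:

$$A\subseteq\alpha \text{ unbounded in }\alpha,\ \operatorname{ot}(A)=\kappa \implies \operatorname{cf}(\alpha)=\operatorname{cf}(\operatorname{ot}(A))=\operatorname{cf}(\kappa)=\kappa,$$

contradicting the hypothesis $\operatorname{cf}(\alpha)\neq\kappa$.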
Now look at your conditions. Each one is finite, since the iteration is a finite support one (according to Jech's Multiple Forcing reference). Replace each condition with its maximal nontrivial coordinate. Either some single coordinate occurs for $\kappa$ many of the conditions, or there is a set of $\kappa$ many distinct maximal coordinates. Apply the above, and move on.
Best Answer
There is no criterion that needs to be satisfied in order to use iterated forcing. Not only that, but since an iteration of forcings is the same as taking a single forcing extension (using the iteration poset), the question sort of falls flat on itself.
Even worse, with the exception of a certain class of "minimal" generic extensions, most (in some sense) forcing notions are in fact iterations, since they can be decomposed into an iteration of smaller subforcings. For example, adding a Cohen real can be thought of as adding two Cohen reals one after the other. And collapsing $\omega_1$ can be thought of as first adding a Cohen real, then adding a branch through the Suslin tree added by that Cohen real, and then collapsing $\omega_1$.
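To make the Cohen example concrete (the notation is standard, though not used above): write $\operatorname{Add}(\omega,1)$ for Cohen forcing, the poset of finite partial functions $p\colon\omega\to 2$. Any bijection between $\omega$ and two disjoint copies of $\omega$ (say, the evens and the odds) induces an isomorphism

$$\operatorname{Add}(\omega,1)\cong\operatorname{Add}(\omega,1)\times\operatorname{Add}(\omega,1),$$

and the product on the right is exactly the two-step iteration in which the second iterand is the check name of the ground model poset. So a single Cohen real literally is two Cohen reals added one after the other.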
So why do we even use iterated forcing?
Because it's convenient. It is easier to break a large problem into smaller problems and then deal with them one at a time. When forcing Martin's Axiom, for example, it is easier to deal with the relevant forcing notions one step at a time, rather than trying to somehow capture all of the existing ones, and all of those yet to come, simultaneously.
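Schematically, this is the standard Solovay–Tennenbaum construction (only sketched here, and not spelled out in the answer): fix a regular $\kappa$ with $2^{<\kappa}=\kappa$ and build a finite support iteration

$$\langle\mathbb{P}_\alpha,\dot{\mathbb{Q}}_\alpha : \alpha<\kappa\rangle$$

of ccc forcings, where a bookkeeping device hands us at stage $\alpha$ a $\mathbb{P}_\alpha$-name $\dot{\mathbb{Q}}_\alpha$ for a ccc poset of size $<\kappa$, arranged so that every such poset of the final model is handled at some stage. Finite support iterations of ccc forcings are ccc, so cardinals are preserved, and the final model satisfies $\mathrm{MA}+2^{\aleph_0}=\kappa$.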
Even worse. The iterative approach to Martin's Axiom is pure magic: every limit step adds Cohen reals; every Cohen real adds a Suslin tree; and yet Martin's Axiom implies that there are no Suslin trees.
How does that happen? Because of the very nature of the iteration: at each step we anticipate "a problem", and we solve it.
Other times we might want to construct an object via forcing, but the construction requires the ground model to contain certain objects which are not guaranteed to exist. Or perhaps the construction requires a certain degree of genericity over the model, so first adding something new to work with is a good thing. In these cases we start with $V$, extend it once with a preparation (which itself may or may not be an iteration, e.g. forcing Martin's Axiom or the indestructibility of large cardinals), and then perform one or two further extensions to obtain the final model.
Yes, we can describe the whole thing as a single forcing poset. But why? It offers no better result, and only increases the difficulty of describing your objects, or of arguing why they have this or that property.
For exactly this reason it is sometimes convenient to think of a Cohen real as a subset of $\omega$, sometimes as a binary sequence in $2^\omega$, and sometimes as a general sequence in $\omega^\omega$. And sometimes, for the same reason, it is easier to think of a single Cohen real as infinitely many different Cohen reals instead.
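As a toy illustration of these recodings (the function names below are mine, purely for demonstration; no actual forcing happens here): a single element of $2^\omega$, represented as a function from naturals to bits, can be read as a subset of $\omega$, or split into countably many sequences along a pairing bijection $\omega\times\omega\to\omega$.

```python
# Toy illustration only: a "real" in 2^omega is represented as a
# Python function from naturals to {0, 1}.

def pair(i, j):
    """Cantor pairing bijection N x N -> N."""
    return (i + j) * (i + j + 1) // 2 + j

def as_subset(real, bound):
    """Read the real as a subset of omega (truncated below `bound`)."""
    return {n for n in range(bound) if real(n) == 1}

def split(real, i):
    """The i-th of countably many reals coded into a single real:
    read the bits along the i-th row of the pairing function."""
    return lambda j: real(pair(i, j))

# Example: the real whose n-th bit is n mod 2 ...
alternating = lambda n: n % 2
# ... viewed as the set of odd numbers below 10:
odds_below_10 = as_subset(alternating, 10)  # {1, 3, 5, 7, 9}
```

The same object is a set, a binary sequence, and (via `split`) a countable family of sequences, with nothing about it changing except the reading.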