I think that in some ways you have answered your question yourself: we see that to prove properties about sets, say within the projective hierarchy, we need representations of those sets of reals as trees, and moreover nice trees with certain properties (in particular the homogeneous trees you mention, with measures attached to them). To get the latter involves measurable cardinals at least; and to get PD or AD we need to be able to shift those trees around in more subtle ways, to prove that complements of nice trees, projections of those complements, etc., also have nice properties, and this requires Woodin cardinals and so on.
Conversely, we find descriptive set-theoretic arguments involved in analysing the mouse components that go into the making of the higher inner models; in particular this is necessary for the so-called core model induction. To construct such models one has to have some kind of inductive process, and to start off this involves an analysis of the lower levels of a putative model, and the description of those levels is, or can be seen as, descriptive set-theoretic, applied to the sets of reals of the model. It begins to look inevitable that DST and inner model theory are inextricably linked.
Thus the two styles of proof of the Kechris-Martin theorem start to look like two sides of the same coin.
Whilst I am not pretending that this is a comprehensive answer to your question (I hesitated to answer, as I am not setting myself up as the "expert" you ask for!), one could add that one aspect, at least, is the following reply to the simpler side question that is often asked: "Why large cardinals?", or "Why are such large trees, and concomitantly large sets, measures, etc. needed for these analyses?" It is possible to see this, at least in the determinacy arena, as simply an extension of what is needed to analyse the Borel sets: Friedman showed that we would need iterations of the power set operation (and Collection to collect together the resulting sets),
with, roughly speaking, the number of iterations proceeding stepwise through the complexity of the Borel sets to show their determinacy (thus for $\alpha \geq \omega$ one would need $\alpha$ many iterates of the power set of $\omega$, etc., to get $Det(\Sigma^0_\alpha)$). Martin's proof of Borel determinacy showed that Friedman had it exactly right: we already need larger and larger trees, or spaces if you like, in which to unravel Borel sets at a given level and exhibit their clopen representation (and hence their determined status) in a correspondingly larger space.
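To make the calibration concrete, here is a rough schematic of the correspondence just described (the indexing follows the approximate statement above; exact offsets at the finite levels are suppressed, so this is an illustration rather than a precise theorem):
$$\underbrace{\mathcal{P}(\omega),\ \mathcal{P}^2(\omega),\ \ldots,\ \mathcal{P}^\alpha(\omega)}_{\alpha\text{ iterates of power set}} \quad\leadsto\quad Det(\Sigma^0_\alpha) \qquad (\alpha \geq \omega),$$
so that full Borel determinacy, $Det(\mathbf{\Delta}^1_1)$, calls on $\mathcal{P}^\alpha(\omega)$ for every countable $\alpha$.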
So: we exhaust the power of ZF to show there are enough good trees to establish Borel determinacy. To get higher levels of determinacy we are going to need to go beyond ZF and use strong axioms of infinity, i.e. large cardinals.
Thus the whole enterprise spans a spectrum: through building models of fragments of second order number theory (up to $Det(\Sigma^0_3)$, say), through models of fragments of ZF of certain uncountable ordinal height (Borel determinacy, so we use "weak" inner models of fragments of ZF of set height here), then models of ZF of full ordinal height (for $Det(\Pi^1_1)$), and now the same with inner models for larger cardinals, albeit with more sophisticated arguments.
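Schematically, the spectrum just described (again only a restatement of the paragraph above, not a precise calibration):
$$\begin{array}{ll}
Det(\Sigma^0_3) & \text{fragments of second order number theory}\\
Det(\mathbf{\Delta}^1_1)\ \text{(Borel)} & \text{"weak" inner models of fragments of ZF, of set height}\\
Det(\Pi^1_1) & \text{models of ZF of full ordinal height}\\
\text{PD, AD} & \text{inner models with measurable, Woodin, }\ldots\text{ cardinals}
\end{array}$$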
Forcing in set theory is usually performed to extend a model of set theory to another model $M[G]$ of set theory in which this or that statement of set theory (such as the continuum hypothesis) changes its truth-value. On the other hand, forcing over a model of arithmetic $M$ does not produce an extension of $M$, but rather an expansion $(M,G)$.
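Schematically, and only to display the contrast drawn above ($PA^*$ here denotes $PA$ with the induction scheme extended to formulas mentioning the new predicate, a standard convention in the models-of-arithmetic literature):
$$M \models ZFC \ \rightsquigarrow\ M[G] \models ZFC, \qquad M \subsetneq M[G]\ \text{(new sets appear)};$$
$$M \models PA \ \rightsquigarrow\ (M,G) \models PA^*, \qquad \text{the first-order universe of } M \text{ is unchanged.}$$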
A serious stumbling block for the application of forcing in $PA$ (Peano Arithmetic) to obtain independence results is a theorem of Gaifman that states that if $M$ and $N$ are models of $PA$ such that $M$ is cofinal in $N$, then $M$ is an elementary submodel of $N$, and in particular, $M$ and $N$ satisfy the same set of arithmetical sentences. Gaifman's proof, by the way, heavily relies on the so-called MRDP theorem (which states that every r.e. set is Diophantine).
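In symbols, writing $M \subseteq_{\mathrm{cf}} N$ for "$M$ is cofinal in $N$", i.e. every element of $N$ lies below some element of $M$ (notation chosen just for this display), Gaifman's theorem reads:
$$M, N \models PA,\quad M \subseteq N,\quad M \subseteq_{\mathrm{cf}} N \ \Longrightarrow\ M \prec N.$$
So no extension in which $M$ remains cofinal can change which arithmetical sentences hold.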
Forcing has had an enormous success in clarifying definability questions in both arithmetic and set theory, but in comparison it has had rather limited efficacy so far in complexity theory, certainly not for lack of effort or expertise: I know more than one expert in computational complexity whose résumé includes first-rate papers on set-theoretic forcing.
See Diagonalizations over polynomial time computable sets, in which two types of genericity are introduced and used to examine which complexity properties are provable by simple diagonalizations over $P$.
See also An oracle builder's toolkit. The paper shows how various notions of genericity can be used to overcome difficulties arising out of interactions between coding and diagonalization techniques in oracle constructions. After an abstract definition of genericity in a general setting and a discussion of relativized genericity and some technical remarks for specialists, Cohen generics and forcing with finite functions are considered. General techniques to obtain results about Cohen generics are illustrated for some previously known oracle constructions, for example, for the existence of an oracle relative to which the isomorphism conjecture fails, but relative to which there are no one-way functions. To include requirements that can only be met by infinite extensions, the notion of SP-genericity (symmetric perfect genericity) is introduced.
Also the book Forcing with Random Variables and Proof Complexity is related and very useful.
Let me add a few more references:
Analytic sets in descriptive set theory and NP sets in complexity theory.
Abstract. The sets in the complexity class NP are definable by existentially quantifying (with a polynomial bound on lengths) polynomial-time computable binary relations. In descriptive set theory, analytic sets are definable by existentially quantifying Borel relations. This analogy between NP and analytic sets suggests that one might be able to solve problems in complexity theory by adapting the methods of descriptive set theory. Toward this goal, the author introduces a closely related pair of sets: $W$, a set of infinite graphs, and $W_{\mathrm{fin}}$, a set of finite graphs. Each consists of the graphs that do not admit cliques of a certain sort. The author shows that $W$ is a complete coanalytic set and $W_{\mathrm{fin}}$ is a complete coNP set. It follows, by the hierarchy theorem of descriptive set theory, that $W$ is not analytic, but this proof, by diagonalization, does not carry over to the finite case. The author therefore gives an elegant alternative proof that $W$ is not analytic, by a combinatorial method that avoids diagonalization. One can hope that this proof can be adapted to the finite case, to prove that $W_{\mathrm{fin}}$ is not in NP. Of course, a proof of this would imply that NP is not closed under complementation and, in particular, $P \neq NP$. So it is not surprising that the hoped-for finitization of the proof is not achieved. Nevertheless, the author obtains some results in this direction, producing a probabilistic conjecture which, if proved, would complete the argument, proving that $W_{\mathrm{fin}}$ is not in NP and not even in the non-uniform complexity class NP/poly. The paper ends with a discussion of some variants of $W$ and $W_{\mathrm{fin}}$.
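The analogy in the first two sentences of the abstract can be put side by side (a schematic restatement; $\Sigma^*$ denotes finite strings and $p$ ranges over polynomials):
$$A \in NP \iff \exists\ \text{poly-time } R \subseteq \Sigma^* \times \Sigma^*,\ \exists p:\quad x \in A \leftrightarrow \exists y\,\bigl(|y| \leq p(|x|) \wedge R(x,y)\bigr),$$
$$A \subseteq \omega^\omega \text{ analytic} \iff \exists\ \text{Borel } B \subseteq \omega^\omega \times \omega^\omega:\quad x \in A \leftrightarrow \exists y\, B(x,y).$$
Complementation then pairs coNP with the coanalytic sets, $W_{\mathrm{fin}}$ with $W$, and a finitization of the proof that $W$ is not analytic would give $W_{\mathrm{fin}} \notin NP$.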
Descriptive set theory and Boolean complexity theory
Abstract. We propose a coanalytic set whose finite equivalent is coNP-complete, and we study how a combinatorial proof of its non-analyticity might lead to a proof of "(non-uniform) NP $\neq$ (non-uniform) coNP".