You already know that first-order logic has a completeness theorem. That means we can characterize validity in first-order logic by looking at deductions - a sentence is valid if and only if it is provable - and this is what makes proof theory possible. In second-order logic with full semantics there is no completeness theorem, so to study things like validity we end up having to answer questions about the power set of the domain.
Here's an example. There is a sentence $\phi_1$ in the second-order language of ordered fields that characterizes the real numbers, up to isomorphism, in second-order logic with full semantics. There is another sentence $\phi_2$, in the language with just equality, which states that the domain has cardinality $\aleph_1$ (that is, any model of $\phi_2$ in second-order logic with full semantics has a domain of that cardinality). Now, because $\phi_1$ has only one model up to isomorphism, namely $\mathbb{R}$, whose domain has cardinality $2^{\aleph_0}$, the implication $\phi_1 \to \phi_2$ is valid if and only if $2^{\aleph_0} = \aleph_1$. So to show that $\phi_1 \to \phi_2$ is valid in this logic we would have to prove the continuum hypothesis, and to show it is not valid we would have to disprove the continuum hypothesis.
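To give a flavor of what such a sentence looks like, here is one standard way (a sketch; the quoted abbreviations and their names are mine) to say "the domain has cardinality $\aleph_1$" using only second-order quantifiers and equality:

$$\phi_2 \;:=\; (\exists R)\,\bigl[\,\text{"$R$ well-orders the domain"} \;\wedge\; (\forall x)\,\text{"$\{y : y\,R\,x\}$ is countable"} \;\wedge\; \neg\,\text{"the domain is countable"}\,\bigr].$$

Each quoted clause is second-order expressible: "well-orders" says every nonempty set has an $R$-least element, "countable" says there is a linear order whose proper initial segments are all finite, and "finite" says every injection of the set into itself is surjective. A full model satisfies $\phi_2$ exactly when it carries a well-order of type $\omega_1$, that is, exactly when its domain has cardinality $\aleph_1$.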
Examples like this give us a sense that studying second-order logic with full semantics comes down, in many cases, to studying set theory. But if that's the case, many people say, why not just study set theory directly, as with ZFC? Studying set theory in the guise of "logic" only seems to obfuscate what's going on.
Moreover, for those who want to use the logic for foundational purposes, it is unattractive to pick a logic that seems to already have the answers to set-theoretic questions like the continuum hypothesis built into it - this goes against the idea that "logic" itself should make a minimal number of ontological assumptions.
This sort of argument was made in detail by Quine, who called second-order logic with full semantics "set theory in sheep's clothing". Not everyone agrees with this, and many people do use second-order logic with Henkin semantics, which keeps the second-order syntax while avoiding the set-theoretic commitments (Henkin semantics amounts to a many-sorted first-order semantics, so it regains a completeness theorem at the cost of the full expressiveness). But the dominant opinion accepts Quine's argument.
I also recommend "The Road to Modern Logic - An Interpretation" by José Ferreirós, Bulletin of Symbolic Logic 7 (2001), no. 4, pp. 441–484. This paper is a very nice historical study of the development of what is now called first-order logic.
This system of quantified propositional logic is straightforward to interpret into first-order logic. We make a theory $T$ whose signature has a single unary relation symbol, say $P$, and nothing else, not even equality. Then, to quantify over "propositional variables", we quantify over elements in first-order logic as usual. For each element $x$ in a model of $T$, $Px$ is either true or false, so the elements of the model can be treated as if they were propositional variables.
Thus the quantified propositional sentence $(\exists Q)(\forall R)[R \lor Q]$ is interpreted into $T$ as $(\exists q)(\forall r)[Pr \lor Pq]$. In this way, every sentence of quantified propositional logic is interpreted as a sentence in the language of $T$, and vice versa.
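Spelled out, the interpretation is a simple recursion on formulas. Writing $\phi^{*}$ for the translation (the star notation is just for this sketch), with a fresh first-order variable $q$ assigned to each propositional variable $Q$:

$$Q^{*} = Pq, \qquad (\lnot\phi)^{*} = \lnot\,\phi^{*}, \qquad (\phi \lor \psi)^{*} = \phi^{*} \lor \psi^{*}, \qquad ((\exists Q)\,\phi)^{*} = (\exists q)\,\phi^{*}, \qquad ((\forall Q)\,\phi)^{*} = (\forall q)\,\phi^{*}.$$

The example above is exactly this recursion applied to $(\exists Q)(\forall R)[R \lor Q]$.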
If we wanted to add constant symbols to $T$, that would be equivalent to adding propositional constants (that is, fixed propositional symbols that are not subject to quantification) to quantified propositional logic.
I would suggest that the main reason we don't bother having quantified propositional variables in the "usual" framework for first-order logic is that they are not useful for formalizing the typical mathematical theories (group theory, linear orders, set theory, arithmetic, etc.), and a central goal in most presentations of first-order logic is to be able to formalize those theories. The same holds for $\lambda$-terms for defining functions. There is no reason they could not be included in first-order theories, and in fact they sometimes are, but most presentations have no use for them.
Best Answer
I don't think there is any meaningful interpretation of "the second sense of complete" in this context that differs from the first. "Every true statement" could only mean "all statements true in every model." Reading the claim that way, completeness is exactly completeness in the first sense.
The reason for the distinction in the incompleteness theorem is that we have some particular model in mind (the standard model of arithmetic), and we want to know whether we can prove everything true in that model from our proof system together with our axioms. The completeness theorem says that our proof system is fine for such purposes: it proves everything true in every model of the axioms. The incompleteness theorem says that our axioms cannot be: no consistent, effectively axiomatized set of axioms suffices to prove everything true in the standard model. Again, the distinction doesn't make sense if we're just talking about a proof system for some language.
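In symbols, with $\vdash$ for provability and $\models$ for truth in a model or semantic consequence, the two results sit side by side like this (a sketch, using $\mathrm{PA}$ and the standard model $\mathbb{N}$ as the usual concrete instance):

$$\text{Completeness:}\;\; T \models \varphi \implies T \vdash \varphi \qquad\qquad \text{Incompleteness:}\;\; \text{there is a sentence } \varphi \text{ with } \mathbb{N} \models \varphi \text{ and } \mathrm{PA} \nvdash \varphi.$$

The first says the proof system captures every semantic consequence of any set of axioms $T$; the second says no consistent, effectively axiomatized $T$ extending basic arithmetic captures every truth of the one intended model.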