This system of quantified propositional logic is straightforward to interpret into first-order logic. We make a theory $T$ that has a single, unary relation symbol, say $P$, and no other symbols in the signature, not even equality. Then, to quantify over "propositional variables", we quantify over elements in first order logic as usual. For each element $x$ in a model of $T$, $Px$ is either true or false, so the elements of the model can be treated as if they were propositional variables.
Thus the quantified propositional sentence $(\exists Q)(\forall R)[R \lor Q]$ is interpreted into $T$ as $(\exists q)(\forall r)[Pr \lor Pq]$. In this way, every sentence of quantified propositional logic is interpreted as a sentence of $T$, and vice-versa.
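To see the interpretation concretely, here is a minimal sketch (my own illustration, with a hypothetical helper name, not part of the answer) that brute-force checks the translated sentence $(\exists q)(\forall r)[Pr \lor Pq]$ over a finite model of $T$, where a model is just an assignment of a truth value of $P$ to each domain element:

```python
def holds(P):
    """Brute-force check of (exists q)(forall r)[Pr or Pq] over a finite
    model of T, given as P: a dict mapping each domain element to the
    truth value of P at that element."""
    domain = P.keys()
    return any(all(P[r] or P[q] for r in domain) for q in domain)

# If some element satisfies P, pick it as q and the sentence holds:
print(holds({"a": True, "b": False}))    # True
# If no element satisfies P, every choice of q fails:
print(holds({"a": False, "b": False}))   # False
```

Each domain element behaves exactly like a propositional variable whose value is $Px$, which is the point of the interpretation.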
If we wanted to add constant symbols to $T$, that would be equivalent to adding propositional constants (i.e. fixed, non-variable propositional symbols) to quantified propositional logic.
I would suggest that the main reason we don't bother having quantified propositional variables in the "usual" framework for first-order logic is that they are not useful for formalizing the typical mathematical theories (group theory, linear orders, set theory, arithmetic, etc.), and a central goal of most presentations of first-order logic is to be able to formalize these theories. The same holds for $\lambda$-terms for defining functions. There is no reason they could not be included in first-order theories, and in fact they sometimes are, but most presentations have no use for them.
Your formula looks fine: as @Derek Elkins said, your implication validly combines well-formed atomic formulas, which are in turn built using the equality relation and a function applied to good, whatever that represents in your underlying language.
I think the second concern/question, about equality or the equality symbol $=$, is a little more engaging. First, you should note that on a syntactical level, that is, on the formal-language level of first-order logic, $=$ is just a symbol, initially with no attached meaning. The meaning is supplied via semantics. In first-order logic, one often distinguishes between first-order logic with equality and first-order logic without equality.
FO with equality incorporates $=$ as a primitive symbol of the signature/language and forces its interpretation in a model $\mathfrak{M}=\langle M,\dots\rangle$ to be actual object identity, i.e. (without getting into too much formalism) the syntactical structure $a=b$, with variables (or constants, terms) $a,b$, is true under an interpretation if and only if the objects of $M$ assigned to $a$ and $b$ are the same, i.e. are equal on the meta-level.
FO without equality essentially omits this concept as a primitive. The reasons for the distinction are largely historical, and I omit further details here.
What I think is more important is that you can of course add a relation symbol to the signature/language as a custom equality symbol and enforce some kind of behaviour by restricting the desired models to those having the corresponding properties, e.g. through a corresponding axiomatization of the desired model class; that is, you may specify the class of models you want to consider as the models of some set of first-order sentences:
Say you want to introduce another type of equality in some context, written syntactically in the formal language as $\equiv$, for which you may require various properties, e.g. transitive behaviour:
$$\forall x\forall y\forall z:x\equiv y\rightarrow(y\equiv z\rightarrow x\equiv z)$$
Classical equality as perceived e.g. in mathematics of course has other properties as well, such as reflexivity and symmetry, which you may add or omit depending on whatever you try to model (where "model" is an apt descriptor, as you restrict the class of models to those already satisfying your required properties).
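As a concrete illustration (my own sketch, with hypothetical names, not part of the answer), such an axiom can be brute-force checked over a finite structure in which $\equiv$ is interpreted as a set of ordered pairs:

```python
def is_transitive(R, domain):
    """Check the axiom  forall x,y,z: x=y -> (y=z -> x=z)  by brute
    force on a finite domain, where R, a set of ordered pairs,
    interprets the custom equality symbol."""
    return all((x, z) in R
               for x in domain for y in domain for z in domain
               if (x, y) in R and (y, z) in R)

# An interpretation satisfying the transitivity axiom:
print(is_transitive({(1, 2), (2, 3), (1, 3)}, {1, 2, 3}))  # True
# One that violates it: (1,2) and (2,3) hold, but (1,3) does not.
print(is_transitive({(1, 2), (2, 3)}, {1, 2, 3}))          # False
```

A structure counts as a model of your axiomatization exactly when checks like this succeed for every axiom you impose.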
Since your question seems to come from a more applied background, let me end by saying that it depends largely on what you try to model and how you want to model it. But you may always (and this is the great strength and versatility of FO compared to, say, classical propositional logic) specify a signature of symbols representing functions and relations for some scenario, and then specify semantic properties syntactically through corresponding formulas.
EDIT: I'm sorry if I drifted off a little; concerning your question, I was not sure whether you were asking for a soft discussion, an elaboration of a concept, or a more formal treatment of these questions.
Best Answer
Untyped $\lambda$-calculus and untyped combinatory terms both have a form of the $Y$ combinator, which is why types are introduced to avoid infinite reductions. As (untyped) systems, they are equivalent only as equational theories. To elaborate:
Take $(\Lambda , \rightarrow_{\beta\eta})$ to be the set of $\lambda$-terms equipped with the $\beta\eta$-reduction relation (abstraction and application only; I am considering the simplest case) and $(\mathcal{C}, \rightarrow_R)$ the set of combinatory terms equipped with its reduction. We want a translation $T$ that is a bijection between the two sets and respects reduction. The problem is that reduction in combinatory terms happens "too fast" compared with $\beta\eta$-reduction, so such a translation does not exist. When the systems are viewed as equational theories, i.e. $(\Lambda , =_{\beta\eta})$ and $(\mathcal{C}, =_R)$, we can find a translation that respects the equalities.
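For one direction of such a translation, here is a minimal sketch (the term representation and names are my own, not from the answer) of the standard bracket-abstraction translation from $\lambda$-terms into $S$/$K$ combinatory terms:

```python
# Terms: ("var", name), ("app", f, a), ("lam", name, body).
# The combinators are represented as variables named "S" and "K",
# so bound variables in the input must not use those names.
S, K = ("var", "S"), ("var", "K")

def free_in(x, t):
    """Does variable x occur free in term t?"""
    if t[0] == "var":
        return t[1] == x
    if t[0] == "app":
        return free_in(x, t[1]) or free_in(x, t[2])
    return t[1] != x and free_in(x, t[2])  # "lam" case

def bracket(x, t):
    """[x]t: abstract the variable x out of a lambda-free term t."""
    if not free_in(x, t):
        return ("app", K, t)
    if t == ("var", x):
        return ("app", ("app", S, K), K)  # S K K behaves as the identity
    # t must be an application: it is lambda-free and x occurs free in it
    return ("app", ("app", S, bracket(x, t[1])), bracket(x, t[2]))

def translate(t):
    """Translate a lambda-term into a combinatory term, innermost first."""
    if t[0] == "var":
        return t
    if t[0] == "app":
        return ("app", translate(t[1]), translate(t[2]))
    return bracket(t[1], translate(t[2]))  # "lam" case

# The identity lambda x. x translates to S K K:
print(translate(("lam", "x", ("var", "x"))))
```

This direction already illustrates the mismatch in the answer: the combinatory image may take more reduction steps, or reduce under different conditions, than the original $\lambda$-term, which is why only the equational theories match up.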
An interesting fact is that $=_{\beta\eta} \approx =_{ext}$, where $=_{ext}$ is a form of extensional equality on $\lambda$-terms.
To answer your question: when you view these systems as equational theories, they are both expressible (in FOL), since one of them is and the translation exists. But using just reduction, $\lambda$-calculus has the notion of bound variables, while combinatory logic does not. Bound variables require more expressive power in the system, more than FOL: you need access to FOL-level expressions in order to differentiate bound from free variables, which you cannot do from inside the system.
Concepts related to, and explanatory of, my abuse of terminology when I say "you need to have access to expressions of FOL" are the $\lambda$-cube and higher-order logics from formal systems, and "linguistic levels" from linguistics. Informally, it is the level of precision the system in question has. Take for example the alphabet $\{a,b\}$ and consider all words over this alphabet. This language requires the minimum amount of precision to express, since it is the largest language one can create from this alphabet. Now consider only the words of the form $a^nb^n$. This language requires a higher level of precision for a system to express, because you need to be able to "look inside" a word to tell whether it belongs to the language. You can read more about this topic at https://devopedia.org/chomsky-hierarchy
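The $a^nb^n$ example can be made concrete with a small sketch (my own illustration, with hypothetical function names): membership in the full language over $\{a,b\}$ is a finite-state test, while membership in $a^nb^n$ requires counting, i.e. looking inside the word:

```python
import re

def over_ab(w):
    """Membership in the largest language over {a, b}: a single
    finite-state pattern suffices."""
    return re.fullmatch(r"[ab]*", w) is not None

def in_anbn(w):
    """Membership in {a^n b^n : n >= 0}: a finite-state device cannot
    count, so we rebuild the word from its length and compare."""
    half, rem = divmod(len(w), 2)
    return rem == 0 and w == "a" * half + "b" * half

print(over_ab("abba"))   # True
print(in_anbn("aabb"))   # True
print(in_anbn("abab"))   # False
```

The second check needs strictly more machinery than the first, mirroring the jump from regular to context-free languages in the Chomsky hierarchy linked above.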