Decidability - which in this question means computability - is always about a set of formulas. It does not make much sense to look for a single formula whose satisfiability is undecidable.
This is because, if the formula is satisfiable, then the program that always outputs "Yes" will be correct, and if the formula is not satisfiable, the program that always outputs "No" will be correct. So, in a somewhat trivial way, the satisfiability of each individual formula is decidable. But this is so trivial that we don't view it as interesting.
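To make the triviality concrete, here is a small sketch in Python: for any single, fixed formula, one of the two constant programs below is a correct decider for its satisfiability. The catch is only that we may not know which one.

```python
# For any fixed formula F, exactly one of these two constant programs
# correctly decides the question "is F satisfiable?".

def always_yes(formula):
    """Correct decider whenever `formula` happens to be satisfiable."""
    return True

def always_no(formula):
    """Correct decider whenever `formula` happens to be unsatisfiable."""
    return False

# For a formula we know is satisfiable, such as "p or not p",
# always_yes is the correct decider:
print(always_yes("p or not p"))  # True
```

Neither program inspects its input at all, which is exactly why this observation tells us nothing interesting: decidability only becomes a substantive notion for an infinite set of formulas handled by one program.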
What is true is that, when we consider sufficiently rich languages of first-order logic, there is no single program that, given an arbitrary formula, will decide whether that formula is satisfiable or not. The key point is that this is a problem involving an infinite set of formulas; the same program would have to decide correctly whether each of them is satisfiable.
It is still the case, of course, that there are individual statements of first-order logic such that we do not know whether those statements are satisfiable or not. For example, let $C$ be any unsolved conjecture, expressed in the language of set theory. Let $A$ be the conjunction of the finite set of axioms for NBG set theory (which is stronger than ZFC set theory). Then we will not know whether the formula $A \land \lnot C$ is satisfiable or not. We could take $C$ to be the Riemann hypothesis or any other unsolved problem that can be formalized in set theory.
The key point is that there are foundationally strong theories that have finite axiom systems, so we can code the entire axiom system as a single formula. That is also the key fact behind the proof that first-order logic is not decidable when the language is sufficiently rich.
A prefix of a formula is just some initial segment of its leftmost characters. So if $(A \wedge B) \vee C$ is well-formed in your notation, then $($ and $(A \wedge B$ are both prefixes. Note that both of these prefixes have more left brackets than right.
The usual way to prove these is by induction on the complexity of the formula. First go through the atomic formulas and note that the property holds for each of them. Then, if you have a composition rule saying that when $A$ and $B$ are well formed, $(A \wedge B)$ is also well formed, note that any prefix of the new formula is either $($ followed by a prefix of $A$ (which by the induction hypothesis has at least as many lefts as rights), or $(A$ possibly followed by the $\wedge$, or $(A \wedge$ followed by a prefix of $B$. In every case the extra initial $($ ensures you wind up with at least as many lefts as rights. You have to go through all the ways of building up formulas and show that the property is preserved.
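The invariant the induction establishes can also be checked mechanically. Here is a sketch (not a substitute for the proof) that scans a formula left to right and verifies that no prefix ever has more right brackets than left:

```python
def prefix_invariant_holds(formula: str) -> bool:
    """Return True if every prefix of `formula` has at least as many
    left brackets '(' as right brackets ')'."""
    balance = 0  # running count of lefts minus rights
    for ch in formula:
        if ch == '(':
            balance += 1
        elif ch == ')':
            balance -= 1
        if balance < 0:  # some prefix had more ')' than '('
            return False
    return True

print(prefix_invariant_holds("((A∧B)∨C)"))  # True
print(prefix_invariant_holds(")A("))        # False
```

The running `balance` going negative at some character is exactly the statement that the prefix ending there violates the invariant; for strings built by the composition rules above, the induction guarantees this never happens.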
I'd suggest first rephrasing it in English as a sentence that uses more and more of the formal logic expressions: something like
"For any towers x and y, x and y have the same color"
(Here I'm taking advantage of the fact that saying a whole class of elements shares a property is equivalent to saying that every pair of elements in that class shares it.)
then we can further rephrase this as
"For any towers x and y, x has color c if and only if y has color c"
and finally
"For any towers x and y, any c is the color of x if and only if it is the color of y"
Then we need to write that symbolically, remembering that quantifying over certain types of things (towers in our case) can be written as quantification over an implication:
"For all x and y, if x and y are towers, then any c is the color of x if and only if it is the color of y"
$\forall x \forall y \left ( \left ( \operatorname{Tower}(x) \& \operatorname{Tower}(y) \right ) \longrightarrow \forall c \left ( \operatorname{Color}(x,c)\leftrightarrow \operatorname{Color}(y,c) \right ) \right )$
If you want, you can pull the quantification over the $c$ variable out front:
$\forall x \forall y \forall c \left ( \left ( \operatorname{Tower}(x)\& \operatorname{Tower}(y) \right ) \longrightarrow \left ( \operatorname{Color}(x,c)\leftrightarrow \operatorname{Color}(y,c)\right ) \right )$
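One way to sanity-check a formalization like this is to evaluate it over a small finite structure by brute force. The structure below (the domain, the `Tower` set, and the `Color` relation) is hypothetical, chosen only to illustrate the quantifier structure of the sentence:

```python
from itertools import product

# A hypothetical finite structure, invented for illustration.
domain = ["t1", "t2", "house", "red", "blue"]
tower = {"t1", "t2"}                                   # interprets Tower(x)
color = {("t1", "red"), ("t2", "red"), ("house", "blue")}  # interprets Color(x, c)

def holds(domain, tower, color):
    """Evaluate ∀x∀y∀c ((Tower(x) ∧ Tower(y)) → (Color(x,c) ↔ Color(y,c)))."""
    for x, y, c in product(domain, repeat=3):
        if x in tower and y in tower:
            # The biconditional Color(x,c) ↔ Color(y,c) fails exactly
            # when the two membership tests disagree.
            if ((x, c) in color) != ((y, c) in color):
                return False
    return True

print(holds(domain, tower, color))  # True: both towers are colored exactly red

# Give t2 an extra color and the sentence becomes false:
print(holds(domain, tower, color | {("t2", "blue")}))  # False
```

Note how the implication is rendered: triples where `x` or `y` is not a tower are simply skipped, which is the finite-model counterpart of the implication being vacuously true there.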