If I understood correctly, you are given a vertex set, e.g. $V = \{1,2,3,4,5,6,7\}$. You pick $j_1 = \{1,2,4\}$ (for example) and $j_2 = \{2,1,3\}$. The edge set $\{12,14,24\}$ forms the first triangle, and the edge set $\{12,23,13\}$ forms the second. The cardinality of the union of the two edge sets is $5$, and hence $P(\mathcal{E}_{j_1}\cap\mathcal{E}_{j_2}) = p^5$. Since the $\mathcal{E}_j$s may be picked arbitrarily, I see no way around computing the edge sets, taking their union, finding its cardinality, and raising $p$ to that cardinality.
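This computation is mechanical, so here is a minimal sketch in Python (the vertex triples and the value of $p$ are the illustrative ones from above):

```python
# Compute P(E_{j1} ∩ E_{j2}) = p^|union of edge sets| for two triangles.
from itertools import combinations

p = 0.5                          # edge probability, chosen arbitrarily
j1, j2 = {1, 2, 4}, {1, 2, 3}    # the two vertex triples from the example

def edge_set(triple):
    """All 3 edges of the triangle on the given vertex triple."""
    return {frozenset(e) for e in combinations(sorted(triple), 2)}

union = edge_set(j1) | edge_set(j2)
print(len(union))                # 5 edges in the union
print(p ** len(union))           # P(E_{j1} ∩ E_{j2}) = p^5 = 0.03125
```

The same function handles any family of triples: take the union of all their edge sets and raise $p$ to its cardinality.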
The usual way to do this is as follows:
Suppose that your graph IS bipartite. Then there is a partition $[n]=A\cup B$, $A\cap B=\varnothing$, such that all edges in the graph have one end in $A$ and one end in $B$.
Let $S$ be the number of such partitions -- that is, the number of ways we can write $[n]=A\cup B$, $A\cap B=\varnothing$, such that there are no edges inside $A$ and no edges inside $B$. Then the probability that your graph is bipartite is precisely $P(S>0)$. But, by Markov's inequality, we know that
$$
P(S>0)=P(S\geq 1)\leq\frac{\mathbb{E}[S]}{1}=\mathbb{E}[S],
$$
where $\mathbb{E}$ denotes the expectation. But, since expectations break up over addition, we have
$$
\mathbb{E}[S]=\sum_{\substack{[n]=A\cup B\\A\cap B=\varnothing}}P(\text{no edges within $A$ or $B$}).
$$
Consider this probability. This is equivalent to saying "No vertex in $A$ chooses a neighbor in $A$ and no vertex in $B$ chooses a neighbor in $B$". Since the choices are made independently, this simplifies a lot:
$$
P(\text{no edges within $A$ or $B$})=\prod_{v\in A}P(\text{$v$ chooses no neighbors in $A$})\cdot\prod_{v\in B}P(\text{$v$ chooses no neighbors in $B$})
$$
For a fixed $v$ and a fixed set $A$,
$$
P(\text{$v$ chooses no neighbors in $A$})=\frac{\binom{b+s-1}{s}}{\binom{n+s-1}{s}},
$$
where $\newcommand{\order}[1]{\lvert #1 \rvert}b:=\order{B}$. Why? Because $v$ not choosing elements of $A$ means that it chooses only elements of $B$; by stars and bars, the number of ways to choose $s$ elements from $b$, with replacement, is $\binom{b+s-1}{s}$, while the total number of choices is $\binom{n+s-1}{s}$.
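Under the model assumed here (each vertex picks a uniformly random multiset of $s$ neighbors), the stars-and-bars count can be checked by brute-force enumeration; the values of $n$, $s$, and the set $B$ below are illustrative choices:

```python
# Brute-force check of the multiset count behind the probability above.
# Model assumption: v picks a uniformly random multiset of s neighbors.
from itertools import combinations_with_replacement
from math import comb

n, s = 7, 3
B = set(range(4))            # pretend B = {0, 1, 2, 3}, so b = 4
b = len(B)

all_choices = list(combinations_with_replacement(range(n), s))
good = [c for c in all_choices if set(c) <= B]   # v avoids A entirely

assert len(all_choices) == comb(n + s - 1, s)    # C(9, 3) = 84
assert len(good) == comb(b + s - 1, s)           # C(6, 3) = 20
print(len(good) / len(all_choices))              # the probability above
```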
We get a similar result for $v\in B$, except using $a:=\order{A}$ in place of $b$. So, this says
$$
\begin{align*}
\mathbb{E}[S]&=\sum_{a+b=n}\sum_{\substack{[n]=A\cup B\\\order{A}=a,\order{B}=b}}\left[\frac{\binom{a+s-1}{s}}{\binom{n+s-1}{s}}\right]^b\left[\frac{\binom{b+s-1}{s}}{\binom{n+s-1}{s}}\right]^a\\
&=\binom{n+s-1}{s}^{-n}\sum_{a+b=n}\sum_{\substack{[n]=A\cup B\\\order{A}=a,\order{B}=b}}\binom{a+s-1}{s}^b\binom{b+s-1}{s}^a\\
&=\binom{n+s-1}{s}^{-n}\sum_{a=1}^{n-1}\binom{n}{a}\binom{a+s-1}{s}^{n-a}\binom{n-a+s-1}{s}^{a}.
\end{align*}
$$
Here, we have used the fact that determining $a$ determines $b$, and determining $A$ determines $B$... and the fact that there are $\binom{n}{a}$ ways to choose $A$, given $a$.
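The final expression is easy to evaluate numerically, which gives concrete Markov bounds on the probability of bipartiteness; the values of $n$ and $s$ below are illustrative assumptions:

```python
# Evaluate the Markov upper bound E[S] >= P(graph is bipartite).
# E[S] = C(n+s-1, s)^{-n} * sum_a C(n, a) C(a+s-1, s)^{n-a} C(n-a+s-1, s)^a
from math import comb

def expected_S(n, s):
    """Expected number of valid bipartitions, per the derivation above."""
    denom = comb(n + s - 1, s) ** n
    total = 0
    for a in range(1, n):
        b = n - a
        total += comb(n, a) * comb(a + s - 1, s) ** b * comb(b + s - 1, s) ** a
    return total / denom

for n in (5, 10, 20):
    print(n, expected_S(n, s=2))   # the bound shrinks rapidly with n
```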
Best Answer
This is a really good beginner question, and it goes through one of the foremost notions in stochastic theory: that of stochastic domination. It avoids any heavy computation and provides a natural statement which is easy to prove via a smart coupling.
$\underline{\mathit{Stochastic \ domination}}$
Definition : Suppose that $X,Y$ are real valued random variables. Then $X$ is said to (first order) stochastically dominate $Y$ if $P(X \geq t) \geq P(Y \geq t)$ for every real $t$.
This is not to be confused with $X \geq Y$ almost surely. It just means that, for any $t$, it is at least as likely that $X$ exceeds $t$ as it is that $Y$ exceeds $t$.
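To see the definition in action, one can verify numerically that $Bin(n,p)$ stochastically dominates $Bin(n,q)$ whenever $p \geq q$; the parameters below are illustrative choices, not from the question:

```python
# Check P(X >= t) >= P(Y >= t) for X ~ Bin(n, p), Y ~ Bin(n, q), p >= q.
from math import comb

def binom_tail(n, p, t):
    """P(Bin(n, p) >= t), computed from the binomial pmf."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

n, p, q = 10, 0.7, 0.3          # illustrative parameters with p > q
for t in range(n + 1):
    # tolerance guards against floating-point rounding at t = 0
    assert binom_tail(n, p, t) >= binom_tail(n, q, t) - 1e-12
print("Bin(10, 0.7) stochastically dominates Bin(10, 0.3)")
```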
Now, if $G$ is picked as specified in the question, then one can talk about $T(p)$, the number of triangles present in $RG(p)$.
The claim is that $T(p)$ stochastically dominates $T(q)$. If this is true, then $P(T(p) \geq 10) \geq P(T(q) \geq 10)$, as desired.
$\underline{\mathit{A \ theorem\ for\ proving\ stochastic\ domination}}$
The most common method of proving stochastic domination is to create a coupling between the random variables in question. That is, provide a method of constructing random samples from both distributions, even if they may be dependent on each other.
To make this rigorous, we have the notion of a coupling: given random variables $X$ and $Y$, we call a random variable $M = (M_1,M_2)$ a coupling of $X$ and $Y$ if the first marginal of $M$, i.e. $M_1$, has the same distribution as $X$, and $M_2$ has the same distribution as $Y$.
To give an example: take $X$ and $Y$ to be independent $Ber(\frac 12)$ random variables. We can couple them independently: let $M$ be the tuple consisting of two independent coin tosses; then the first and second marginals are distributed as $X$ and $Y$. But there is another coupling too: flip the first coin, call it $M_1$, and then set $M_2 = M_1$. Then both $M_1$ and $M_2$ are $Ber(\frac 12)$, but they are clearly dependent on each other. So couplings can occur in many ways.
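The two couplings just described can be simulated in a few lines (the seed and sample size are arbitrary choices):

```python
# Two couplings of a pair of Ber(1/2) variables: independent vs. identical.
# Both have the correct marginals; only the joint law differs.
import random

random.seed(0)                   # arbitrary seed for reproducibility
N = 100_000

# Coupling 1: two independent coin flips per sample.
ind = [(random.random() < 0.5, random.random() < 0.5) for _ in range(N)]
# Coupling 2: flip once, then copy the result.
dep = [(m1, m1) for m1 in (random.random() < 0.5 for _ in range(N))]

for sample in (ind, dep):
    print(sum(m1 for m1, _ in sample) / N,    # marginal of M1, approx 0.5
          sum(m2 for _, m2 in sample) / N)    # marginal of M2, approx 0.5
print(sum(m1 == m2 for m1, m2 in ind) / N)    # approx 0.5: independent
print(sum(m1 == m2 for m1, m2 in dep) / N)    # exactly 1.0: perfectly dependent
```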
A popular theorem in this regard is the following: if there exists a coupling $M = (M_1,M_2)$ of $X$ and $Y$ such that $M_1 \geq M_2$ almost surely, then $X$ stochastically dominates $Y$.
I will leave the proof as an exercise for you to look up or to try yourself; it follows pretty much from the definition.
$\underline{\mathit{Coupling \ RG(p)\ and \ RG(q)}}$
So how do we couple these random graphs? We go edge by edge, a very popular coupling for graphs which are constructed edge by edge (the so-called Erdős–Rényi graphs). Each edge is present independently with probability $p$ for $RG(p)$ and $q$ for $RG(q)$.
Here's our coupling. Do the following for every pair of vertices in the graph :
Sample a $Z_p = Ber(p)$ random variable.
For a certain $r$, sample a $Z_r = Ber(r)$ random variable, independent of $Z_p$, such that $Z_q = Z_pZ_r$ has the distribution $Ber(q)$. I leave you to calculate $r$ in terms of $p$ and $q$; check that $0 \le r \le 1$ whenever $q \le p$.
Now, for $RG(q)$, set an edge between these vertices if $Z_q = 1$, and for $RG(p)$ set an edge between these vertices if $Z_p = 1$.
That's your coupling! Indeed, for $RG(q)$, each edge is in or not with probability $q$, and similarly for $RG(p)$.
$\underline{\mathit{Finishing \ the \ problem}}$
The nice thing about this construction is that if $Z_q = 1$ then $Z_p = 1$! Therefore, if an edge is present in the sample of $RG(q)$ from the above construction, then it is present in the coupled sample of $RG(p)$ as well!
In particular, $RG(q)$ is a subgraph of $RG(p)$ with probability $1$ under this construction, which of course implies that a triangle in $RG(q)$ remains a triangle in $RG(p)$. Thus, $T(p) \geq T(q)$ almost surely, and we may conclude from the theorem.
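As a sketch, the same coupling can be realized with a single uniform draw per vertex pair (this is equivalent to the $Z_p, Z_r$ construction, and it avoids spoiling the exercise for $r$): declare the edge present in $RG(p)$ if $U < p$ and present in $RG(q)$ if $U < q$. The parameters below are illustrative:

```python
# Monotone coupling of RG(p) and RG(q) via one uniform per vertex pair:
# edge in RG(p) iff U < p, edge in RG(q) iff U < q. Since q <= p,
# every edge of RG(q) is automatically an edge of RG(p).
import random
from itertools import combinations

random.seed(1)                       # arbitrary seed
n, p, q = 12, 0.6, 0.3               # illustrative parameters, q <= p

U = {e: random.random() for e in combinations(range(n), 2)}
Ep = {e for e, u in U.items() if u < p}
Eq = {e for e, u in U.items() if u < q}

def triangles(edges):
    """Count triangles among the given edge set."""
    return sum(
        all(e in edges for e in [(i, j), (i, k), (j, k)])
        for i, j, k in combinations(range(n), 3)
    )

assert Eq <= Ep                      # RG(q) is a subgraph of RG(p)
print(triangles(Ep), triangles(Eq))  # T(p) >= T(q) on every sample
```

Because the subgraph relation holds on every sample, $T(p) \geq T(q)$ holds almost surely under this coupling, which is exactly what the theorem needs.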
$\underline{\mathit{Corollaries \ and\ Conclusion}}$
Note that the above construction is not restricted to triangles: the occurrence of any kind of subgraph, be it a cycle of a particular length or a clique, has an increased chance as the parameter increases. Therefore, we get a vast generalization of the statement you make, with little or no effort, and without having to write down algebraic expressions and compare them, which is cumbersome and tiring.
There are many uses of stochastic domination in graph theory, especially the edge-by-edge coupling. Stochastic domination is also used in stochastic optimal control theory to regulate random variables so that they behave in a streamlined fashion, allowing the prediction of monotone optimal policies. Last but not least, stochastic domination is very important in percolation theory and the theory of random walks, since domination by a random variable with certain tail bounds implies that the same tail bounds carry over to the dominated random variable.