Well, $X_1 = 1$ only when $X_2 = X_3 = X_4 = 1$ and is $0$ otherwise; therefore
$$E(X_1) = P(X_2 = 1, X_3 = 1, X_4 = 1)$$
As @leonbloy mentions, knowledge of the correlations and marginal success probabilities is not sufficient for calculating $E(X_1)$, but it can be written in terms of the conditional probabilities; using the definition of conditional probability,
$$ E(X_1) = P(X_2 = 1, X_3 = 1 | X_4 = 1) \cdot P(X_4 = 1) $$
and $P(X_2 = 1, X_3 = 1 | X_4 = 1)$ can be similarly decomposed as
$$ P(X_2 = 1 | X_3 = 1, X_4 = 1) \cdot P(X_3 = 1 | X_4 = 1) $$
implying
$$E(X_1) = P(X_2 = 1 | X_3 =1, X_4 = 1) \cdot P(X_3 =1 | X_4 = 1) \cdot P(X_4 = 1)$$
Explicit calculation of $E(X_1)$ will require more information about the joint distribution of $(X_2,X_3,X_4)$. The expression above makes intuitive sense: the probability that three dependent Bernoulli trials all succeed is the probability that the first succeeds, times the probability that the second succeeds given the first, times the probability that the third succeeds given the first two. You could equivalently interchange the roles of $X_2, X_3, X_4$.
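As a sanity check, here is a small Monte Carlo sketch. The joint distribution of $(X_2, X_3, X_4)$ below (three Bernoulli variables coupled through a shared latent uniform) is an invented example, not from the question; it just supplies some dependent trials to decompose.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Invented dependent Bernoulli triple: a shared latent uniform U
# drives all three success probabilities, inducing correlation.
u = rng.random(n)
x2 = (rng.random(n) < 0.3 + 0.5 * u).astype(int)
x3 = (rng.random(n) < 0.2 + 0.6 * u).astype(int)
x4 = (rng.random(n) < 0.1 + 0.7 * u).astype(int)
x1 = x2 * x3 * x4  # X1 = 1 iff all three succeed

# Direct estimate of E(X1) = P(X2 = 1, X3 = 1, X4 = 1)
direct = x1.mean()

# Chain of conditional probabilities, as in the decomposition above
p4 = x4.mean()
p3_given_4 = x3[x4 == 1].mean()
p2_given_34 = x2[(x3 == 1) & (x4 == 1)].mean()
chained = p2_given_34 * p3_given_4 * p4

# The two agree up to floating-point rounding: the chain rule is an
# algebraic identity on the sample counts, not just an approximation.
print(direct, chained)
```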
The easy way is to use the law of total variance:
$$\text{Var}(S) = \text{E}_N\left[\text{Var}(S\mid N)\right] + \text{Var}_N\left[\text{E}(S\mid N)\right] = \text{E}_N\left[N\cdot \text{Var}(X)\right] + \text{Var}_N\left[N\cdot\text{E}(X)\right]$$
Can you do it from there? It's pretty much just substitution (well, that and really basic properties of expectation and variance).
(The first part is even more straightforward using the law of total expectation.)
--
As Spy_Lord notes, the answer is $\text{E}(N)\cdot \text{Var}(X) + \text{Var}(N)\cdot\text{E}(X)^2$.
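If you want to convince yourself numerically first, here is a minimal simulation sketch; the choices $N \sim \text{Poisson}(4)$ and $X \sim \text{Exponential}(\text{mean } 2)$ are mine for illustration, not from the question.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, scale = 4.0, 2.0  # illustrative: N ~ Poisson(4), X ~ Exp(mean 2)

trials = 100_000
counts = rng.poisson(lam, size=trials)
# For each trial, S is the sum of N iid exponential summands
s = np.array([rng.exponential(scale, size=k).sum() for k in counts])

# Theoretical value: E(N) Var(X) + Var(N) E(X)^2
ex, varx = scale, scale**2  # mean and variance of Exp(mean 2)
en, varn = lam, lam         # mean and variance of Poisson(4)
print(s.var(), en * varx + varn * ex**2)  # agree up to simulation noise
```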
An alternative approach is to evaluate $E(S_N^2)$. Following the route you seem to be aiming at:
\begin{eqnarray}
E(S_N^2) &=& \sum_r E(S_N^2|N=r) p_r\\
&=& \sum_r (r\sigma_2^2+r^2 \mu_2^2) p_r\\
&=& \sigma_2^2\sum_r rp_r+\mu_2^2\sum_rr^2 p_r \\
&=& \sigma_2^2\,\text{E}(N)+\mu_2^2\,\text{E}(N^2)
\end{eqnarray}
and I assume you can do it from there.
However, to be honest, I think this way is easier (it's actually the same approach; you just don't have to sum explicitly over all the mutually exclusive events). The law of total expectation says $\text{E}(X) = \text{E}_Y[\text{E}_{X|Y}(X|Y)]$, so
\begin{eqnarray}
\text{E}(S^2_N) &=& \text{E}_N[\text{E}(S^2_N|N)]\\
&=& \text{E}_N[N\sigma_2^2+N^2\mu_2^2]\\
&=& \sigma_2^2\text{E}(N)+\mu_2^2\text{E}(N^2)
\end{eqnarray}
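For completeness, the remaining step: since $\text{E}(S_N) = \mu_2\,\text{E}(N)$ by the law of total expectation,
$$\text{Var}(S_N) = \text{E}(S_N^2) - \text{E}(S_N)^2 = \sigma_2^2\,\text{E}(N) + \mu_2^2\left[\text{E}(N^2) - \text{E}(N)^2\right] = \sigma_2^2\,\text{E}(N) + \mu_2^2\,\text{Var}(N),$$
which is the answer quoted above, with $\sigma_2^2 = \text{Var}(X)$ and $\mu_2 = \text{E}(X)$.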
--
Yes, there is a well-known result. Based on your edit, we can focus first on individual entries of the matrix $E[x_1 x_2^T]$. Each such entry is the expectation of a product of two variables of zero mean and finite variances, say $\sigma_1^2$ and $\sigma_2^2$. The Cauchy-Schwarz inequality implies the absolute value of this expectation cannot exceed $\sigma_1 \sigma_2$. In fact, every value in the interval $[-\sigma_1 \sigma_2, \sigma_1 \sigma_2]$ is possible, because it arises for some binormal distribution. Therefore the $(i,j)$ entry of $E[x_1 x_2^T]$ must be less than or equal to $\sqrt{(\Sigma_1)_{i,i}\,(\Sigma_2)_{j,j}}$ in absolute value.
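A quick numerical illustration of the attainability claim (the standard deviations below are arbitrary choices): for a binormal pair with correlation $c$, $E[xz] = c\,\sigma_1\sigma_2$, so sweeping $c$ across $(-1,1)$ traces out the whole interval.

```python
import numpy as np

rng = np.random.default_rng(2)
s1, s2 = 2.0, 3.0  # arbitrary standard deviations for illustration

for c in (-0.95, -0.5, 0.0, 0.5, 0.95):
    cov = [[s1**2, c * s1 * s2], [c * s1 * s2, s2**2]]
    x, z = rng.multivariate_normal([0, 0], cov, size=500_000).T
    # Sample E[xz] should be close to c * s1 * s2
    print(c, (x * z).mean(), c * s1 * s2)
```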
If we now assume all variables are normal and that $(x_1; x_2)$ is multinormal, there will be further restrictions because the covariance matrix of $(x_1; x_2)$ must be positive semidefinite. Rather than belabor the point, I will illustrate. Suppose $x_1$ has two components $x$ and $y$ and that $x_2$ has one component $z$. Let $x$ and $y$ have unit variance and correlation $\rho$ (thus specifying $\Sigma_1$) and suppose $z$ has unit variance ($\Sigma_2$). Let the expectation of $x z$ be $\alpha$ and that of $y z$ be $\beta$. We have established that $|\alpha| \le 1$ and $|\beta| \le 1$. However, not all combinations are possible: at a minimum, the determinant of the covariance matrix of $(x_1; x_2)$ cannot be negative. This imposes the non-trivial condition
$$1-\alpha ^2-\beta ^2+2 \alpha \beta \rho -\rho ^2 \ge 0.$$
For any $-1 \lt \rho \lt 1$ this is an ellipse (along with its interior) inscribed within the $\alpha, \beta$ square $[-1, 1] \times [-1, 1]$.
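The determinant condition is easy to verify symbolically; here is a sketch using sympy, assuming the $3\times 3$ covariance matrix of $(x, y, z)$ from the worked example (unit variances, $\text{corr}(x,y)=\rho$, $E[xz]=\alpha$, $E[yz]=\beta$).

```python
import sympy as sp

rho, alpha, beta = sp.symbols('rho alpha beta', real=True)

# Covariance matrix of (x, y, z) in the worked example
cov = sp.Matrix([
    [1,     rho,  alpha],
    [rho,   1,    beta],
    [alpha, beta, 1],
])

# Expanding the determinant recovers 1 - alpha^2 - beta^2 + 2*alpha*beta*rho - rho^2
print(sp.expand(cov.det()))
```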
To obtain further restrictions, additional assumptions about the variables are necessary.
[Figure: plot of the permissible region in $(\rho, \alpha, \beta)$ space]