In general, suppose $p_1, p_2, \ldots, p_m \in (0,1)$ are IID realizations from some probability distribution with mean $\mu_p$ and standard deviation $\sigma_p$, and $n_1, n_2, \ldots, n_m \in \mathbb Z^+$ are IID realizations from another probability distribution with mean $\mu_n$ and standard deviation $\sigma_n$, and for each $i = 1, 2, \ldots, m$ we have random variables $$X_i \sim \operatorname{Binomial}(n_i, p_i).$$ If we are interested in the distribution of $S = \sum_{i=1}^m X_i$, then by linearity of expectation $$\operatorname{E}[S] = \sum_{i=1}^m \operatorname{E}[X_i].$$ In turn, for each $X_i$, the law of total expectation gives $$\operatorname{E}[X_i] = \operatorname{E}[\operatorname{E}[X_i \mid n_i, p_i]] = \operatorname{E}[n_i p_i] = \operatorname{E}[n_i]\operatorname{E}[p_i] = \mu_n \mu_p;$$ thus $$\operatorname{E}[S] = m\mu_n \mu_p.$$ This assumes that $n_i$ and $p_i$ are independent for each $i$, and that the $X_i$ are independent of one another (which follows from the independence of the pairs $(n_i, p_i)$ across $i$). The variance calculation is done in a similar fashion: $$\operatorname{Var}[S] \overset{\text{ind}}{=} \sum_{i=1}^m \operatorname{Var}[X_i],$$ whence by the law of total variance
$$\begin{align*}
\operatorname{Var}[X_i]
&= \operatorname{Var}[\operatorname{E}[X_i \mid n_i, p_i]] + \operatorname{E}[\operatorname{Var}[X_i \mid n_i, p_i]] \\
&= \operatorname{Var}[n_i p_i] + \operatorname{E}[n_i p_i (1-p_i)] \\
&= (\sigma_n^2 \sigma_p^2 + \sigma_n^2 \mu_p^2 + \sigma_p^2 \mu_n^2) + \mu_n \operatorname{E}[p_i(1-p_i)] \\
&= (\sigma_n^2 \sigma_p^2 + \sigma_n^2 \mu_p^2 + \sigma_p^2 \mu_n^2) + \mu_n (\mu_p - (\sigma_p^2 + \mu_p^2)).
\end{align*}$$
To understand the variance of $n_i p_i$, note that for two independent random variables $A$, $B$, with means and standard deviations $\mu_A, \sigma_A, \mu_B, \sigma_B$, respectively,
$$\begin{align*}\operatorname{Var}[AB]
&= \operatorname{E}[(AB)^2] - \operatorname{E}[AB]^2 \\
&= \operatorname{E}[A^2 B^2] - \operatorname{E}[A]^2 \operatorname{E}[B]^2 \\
&= \operatorname{E}[A^2]\operatorname{E}[B^2] - \mu_A^2 \mu_B^2 \\
&= (\operatorname{Var}[A] + \operatorname{E}[A]^2)(\operatorname{Var}[B] + \operatorname{E}[B]^2) - \mu_A^2 \mu_B^2 \\
&= (\sigma_A^2 + \mu_A^2)(\sigma_B^2 + \mu_B^2) - \mu_A^2 \mu_B^2 \\
&= \sigma_A^2 \sigma_B^2 + \sigma_A^2 \mu_B^2 + \sigma_B^2 \mu_A^2. \end{align*}$$
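As a quick numerical sanity check of this product-variance identity (not part of the original argument; the distributions for $A$ and $B$ below are arbitrary choices of mine), a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6

# Arbitrary independent choices; any distributions with finite variance would do.
A = rng.gamma(shape=2.0, scale=1.0, size=N)   # mu_A = 2, sigma_A^2 = 2
B = rng.beta(2.0, 5.0, size=N)                # mu_B = 2/7, sigma_B^2 = 10/392

mu_A, var_A = 2.0, 2.0
mu_B, var_B = 2/7, 10/392

# Closed form: Var[AB] = sigma_A^2 sigma_B^2 + sigma_A^2 mu_B^2 + sigma_B^2 mu_A^2
closed_form = var_A*var_B + var_A*mu_B**2 + var_B*mu_A**2
print(np.var(A*B), closed_form)  # the two values should agree up to Monte Carlo error
```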
Note that my computation of the variance differs from yours. I have substantiated my results by simulating $m = 10^6$ observations of $X_i$ with $n_i \sim \operatorname{Poisson}(\lambda)$ and $p_i \sim \operatorname{Beta}(a,b)$, for $\lambda = 11$ and $(a,b) = (7,3)$. These parameters should result in $\operatorname{Var}[X_i] = 1001/100$, which the simulation reproduces; your results do not match.

I should also point out that the reason your computation does not work is that the total variance of each $X_i$ is not merely due to the expectation of the conditional variance of $X_i$ given $n_i$ and $p_i$; the other term in the law of total variance must also be included, which captures the variability of the conditional expectation of $X_i$. In other words, there is variation in $X_i$ coming from the binomial variance even when $n_i$ and $p_i$ are fixed, but there is also additional variation in the location of $X_i$ arising from the fact that $n_i$ and $p_i$ are not fixed.
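For reference, a minimal sketch of how such a simulation can be run (this is an illustrative NumPy implementation with the parameters above, not the exact code I used):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 10**6
lam, a, b = 11, 7, 3

n = rng.poisson(lam, size=m)   # n_i ~ Poisson(11):  mu_n = 11, sigma_n^2 = 11
p = rng.beta(a, b, size=m)     # p_i ~ Beta(7,3):    mu_p = 7/10, sigma_p^2 = 21/1100
x = rng.binomial(n, p)         # X_i | n_i, p_i ~ Binomial(n_i, p_i)

mu_n, var_n = lam, lam
mu_p = a / (a + b)
var_p = a * b / ((a + b)**2 * (a + b + 1))

# Var[X_i] = Var[n_i p_i] + E[n_i p_i (1 - p_i)]
theory = (var_n*var_p + var_n*mu_p**2 + var_p*mu_n**2
          + mu_n*(mu_p - (var_p + mu_p**2)))
print(x.mean(), mu_n * mu_p)   # both close to 7.7
print(x.var(), theory)         # both close to 1001/100 = 10.01
```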
Independence of $X$ and $Y$ implies $$\operatorname{E}[Z] = \operatorname{E}[\sin X] \operatorname{E}[\cos Y] = 0,$$ as you already observed.
The variance is more complicated. You'd need to compute $$\operatorname{Var}[Z] = \operatorname{E}[Z^2] - \operatorname{E}[Z]^2 = \operatorname{E}[Z^2] = \operatorname{E}[\sin^2 X \cos^2 Y] = \operatorname{E}[\sin^2 X] \operatorname{E}[\cos^2 Y].$$
To this end, recall that $$\cos^2 \theta = \frac{1 + \cos 2\theta}{2}, \quad \sin^2 \theta = \frac{1 - \cos 2\theta}{2}.$$ So for instance, $$\operatorname{E}[\sin^2 X] = \frac{1 - \operatorname{E}[\cos 2X]}{2} = \frac{1 - e^{-2\sigma_X^2}}{2},$$ since $2X \sim \operatorname{Normal}(0, 4 \sigma_X^2)$. A similar result holds for $\operatorname{E}[\cos^2 Y]$.
All that is left is to demonstrate that $\operatorname{E}[\cos X] = e^{-\sigma_X^2/2}$, which you state without proof. (The fact that $\operatorname{E}[\sin X] = 0$ is an immediate consequence of the facts that $|\sin X| \le 1$, that $\sin (-X) = -\sin X$, and that the distribution of $X$ is symmetric about $0$.) I recommend that you try this as an exercise.
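If you would like to check the closed form numerically, here is a minimal sketch (assuming $X$ and $Y$ are independent centered normals; the standard deviations below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
sx, sy = 0.8, 1.3                      # arbitrary standard deviations
X = rng.normal(0.0, sx, size=10**6)
Y = rng.normal(0.0, sy, size=10**6)
Z = np.sin(X) * np.cos(Y)

# Var[Z] = E[sin^2 X] E[cos^2 Y] = (1 - exp(-2 sx^2)) (1 + exp(-2 sy^2)) / 4
theory = (1 - np.exp(-2 * sx**2)) * (1 + np.exp(-2 * sy**2)) / 4
print(Z.var(), theory)                 # should agree up to Monte Carlo error
```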
$$\begin{aligned}\mathsf{Var}\left(XY\left(1-Y\right)\right) & =\mathsf{Cov}\left(XY\left(1-Y\right),XY\left(1-Y\right)\right)\\ & =\mathsf{Cov}\left(XY-XY^{2},XY-XY^{2}\right)\\ & =\mathsf{Var}\left(XY\right)-2\,\mathsf{Cov}\left(XY^{2},XY\right)+\mathsf{Var}\left(XY^{2}\right)\\ & =\left[\mathbb{E}X^{2}Y^{2}-\left(\mathbb{E}XY\right)^{2}\right]-2\left[\mathbb{E}X^{2}Y^{3}-\mathbb{E}XY^{2}\,\mathbb{E}XY\right]+\left[\mathbb{E}X^{2}Y^{4}-\left(\mathbb{E}XY^{2}\right)^{2}\right]\\ & =\left[\mathbb{E}X^{2}\,\mathbb{E}Y^{2}-\left(\mathbb{E}X\,\mathbb{E}Y\right)^{2}\right]-2\left[\mathbb{E}X^{2}\,\mathbb{E}Y^{3}-\mathbb{E}X\,\mathbb{E}Y^{2}\,\mathbb{E}X\,\mathbb{E}Y\right]+\left[\mathbb{E}X^{2}\,\mathbb{E}Y^{4}-\left(\mathbb{E}X\,\mathbb{E}Y^{2}\right)^{2}\right]\\ & =\left[\mathbb{E}X^{2}\,\mathbb{E}Y^{2}-\mu_{X}^{2}\mu_{Y}^{2}\right]-2\left[\mathbb{E}X^{2}\,\mathbb{E}Y^{3}-\mu_{X}^{2}\mu_{Y}\,\mathbb{E}Y^{2}\right]+\left[\mathbb{E}X^{2}\,\mathbb{E}Y^{4}-\mu_{X}^{2}\left(\mathbb{E}Y^{2}\right)^{2}\right]\\ & =\left[\left(\sigma_{X}^{2}+\mu_{X}^{2}\right)\left(\sigma_{Y}^{2}+\mu_{Y}^{2}\right)-\mu_{X}^{2}\mu_{Y}^{2}\right]-2\left[\left(\sigma_{X}^{2}+\mu_{X}^{2}\right)\mathbb{E}Y^{3}-\mu_{X}^{2}\mu_{Y}\left(\sigma_{Y}^{2}+\mu_{Y}^{2}\right)\right]+\left[\left(\sigma_{X}^{2}+\mu_{X}^{2}\right)\mathbb{E}Y^{4}-\mu_{X}^{2}\left(\sigma_{Y}^{2}+\mu_{Y}^{2}\right)^{2}\right]\\ & =\left[\sigma_{X}^{2}\sigma_{Y}^{2}+\sigma_{X}^{2}\mu_{Y}^{2}+\sigma_{Y}^{2}\mu_{X}^{2}\right]-2\left[\left(\sigma_{X}^{2}+\mu_{X}^{2}\right)\mathbb{E}Y^{3}-\mu_{X}^{2}\mu_{Y}\left(\sigma_{Y}^{2}+\mu_{Y}^{2}\right)\right]+\left[\left(\sigma_{X}^{2}+\mu_{X}^{2}\right)\mathbb{E}Y^{4}-\mu_{X}^{2}\left(\sigma_{Y}^{2}+\mu_{Y}^{2}\right)^{2}\right]\end{aligned}$$
In particular, the fifth equality rests on the independence of $X$ and $Y$.
We cannot simplify further, since evaluating $\mathbb{E}Y^{3}$ and $\mathbb{E}Y^{4}$ requires more information about the distribution of $Y$.
edit:
Special case: $Y\sim\mathsf{Normal}\left(\mu_{Y},\sigma_{Y}^{2}\right)$.
Then $Y=\mu_{Y}+\sigma_{Y}U$ where $U\sim\mathsf{Normal}\left(0,1\right)$, and based on the knowledge that $\mathbb{E}U^{3}=0$ and $\mathbb{E}U^{4}=3$ we find:
$$\mathbb{E}Y^{3}=\mu_{Y}^{3}+3\mu_{Y}\sigma_{Y}^{2},\qquad\mathbb{E}Y^{4}=\mu_{Y}^{4}+6\mu_{Y}^{2}\sigma_{Y}^{2}+3\sigma_{Y}^{4}.$$
Substituting, we find the following expression for $\mathsf{Var}\left(XY\left(1-Y\right)\right)$:
$$\left[\sigma_{X}^{2}\sigma_{Y}^{2}+\sigma_{X}^{2}\mu_{Y}^{2}+\sigma_{Y}^{2}\mu_{X}^{2}\right]-2\left[\sigma_{X}^{2}\mu_{Y}^{3}+3\sigma_{X}^{2}\sigma_{Y}^{2}\mu_{Y}+2\sigma_{Y}^{2}\mu_{X}^{2}\mu_{Y}\right]+\left[\sigma_{X}^{2}\mu_{Y}^{4}+6\sigma_{X}^{2}\sigma_{Y}^{2}\mu_{Y}^{2}+3\sigma_{X}^{2}\sigma_{Y}^{4}+4\sigma_{Y}^{2}\mu_{X}^{2}\mu_{Y}^{2}+2\sigma_{Y}^{4}\mu_{X}^{2}\right]$$
I hope I did not make any mistakes (check me on it).
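Taking up that invitation, a minimal Monte Carlo check (my own sketch, assuming independent normal $X$ and $Y$ with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
mx, sx = 1.5, 0.7                      # arbitrary mean and sd of X
my, sy = 0.4, 1.1                      # arbitrary mean and sd of Y (Y must be normal here)
X = rng.normal(mx, sx, size=10**7)
Y = rng.normal(my, sy, size=10**7)

# Closed-form expression from above, with E[Y^3] and E[Y^4] substituted for normal Y.
theory = ((sx**2*sy**2 + sx**2*my**2 + sy**2*mx**2)
          - 2*(sx**2*my**3 + 3*sx**2*sy**2*my + 2*sy**2*mx**2*my)
          + (sx**2*my**4 + 6*sx**2*sy**2*my**2 + 3*sx**2*sy**4
             + 4*sy**2*mx**2*my**2 + 2*sy**4*mx**2))
print(np.var(X*Y*(1 - Y)), theory)     # should agree up to Monte Carlo error
```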