In general, suppose $p_1, p_2, \ldots, p_m \in (0,1)$ are IID realizations from some probability distribution with mean $\mu_p$ and standard deviation $\sigma_p$, and $n_1, n_2, \ldots, n_m \in \mathbb Z^+$ are IID realizations from another probability distribution with mean $\mu_n$ and standard deviation $\sigma_n$, and for each $i = 1, 2, \ldots, m$ we have a random variable $$X_i \sim \operatorname{Binomial}(n_i, p_i),$$ and we are interested in the distribution of $S = \sum_{i=1}^m X_i$. By linearity of expectation, $$\operatorname{E}[S] = \sum_{i=1}^m \operatorname{E}[X_i].$$ In turn, for each $X_i$, the law of total expectation gives $$\operatorname{E}[X_i] = \operatorname{E}[\operatorname{E}[X_i \mid n_i, p_i]] = \operatorname{E}[n_i p_i] = \operatorname{E}[n_i]\operatorname{E}[p_i] = \mu_n \mu_p;$$ thus $$\operatorname{E}[S] = m\mu_n \mu_p.$$ This assumes that $n_i$ and $p_i$ are independent for each $i$; since the pairs $(n_i, p_i)$ are also independent across $i$, the $X_i$ are mutually independent as well.

The variance calculation is done in a similar fashion: $$\operatorname{Var}[S] \overset{\text{ind}}{=} \sum_{i=1}^m \operatorname{Var}[X_i],$$ whence by the law of total variance
$$\begin{align*}
\operatorname{Var}[X_i]
&= \operatorname{Var}[\operatorname{E}[X_i \mid n_i, p_i]] + \operatorname{E}[\operatorname{Var}[X_i \mid n_i, p_i]] \\
&= \operatorname{Var}[n_i p_i] + \operatorname{E}[n_i p_i (1-p_i)] \\
&= (\sigma_n^2 \sigma_p^2 + \sigma_n^2 \mu_p^2 + \sigma_p^2 \mu_n^2) + \mu_n \operatorname{E}[p_i(1-p_i)] \\
&= (\sigma_n^2 \sigma_p^2 + \sigma_n^2 \mu_p^2 + \sigma_p^2 \mu_n^2) + \mu_n (\mu_p - (\sigma_p^2 + \mu_p^2)).
\end{align*}$$
To understand the variance of $n_i p_i$, note that for two independent random variables $A$, $B$, with means and standard deviations $\mu_A, \sigma_A, \mu_B, \sigma_B$, respectively,
$$\begin{align*}\operatorname{Var}[AB]
&= \operatorname{E}[(AB)^2] - \operatorname{E}[AB]^2 \\
&= \operatorname{E}[A^2 B^2] - \operatorname{E}[A]^2 \operatorname{E}[B]^2 \\
&= \operatorname{E}[A^2]\operatorname{E}[B^2] - \mu_A^2 \mu_B^2 \\
&= (\operatorname{Var}[A] + \operatorname{E}[A]^2)(\operatorname{Var}[B] + \operatorname{E}[B]^2) - \mu_A^2 \mu_B^2 \\
&= (\sigma_A^2 + \mu_A^2)(\sigma_B^2 + \mu_B^2) - \mu_A^2 \mu_B^2 \\
&= \sigma_A^2 \sigma_B^2 + \sigma_A^2 \mu_B^2 + \sigma_B^2 \mu_A^2. \end{align*}$$
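As a quick numerical sanity check of this identity, here is a minimal Monte Carlo sketch in Python; the gamma and beta distributions (and the seed) are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
size = 10**6

# A ~ Gamma(shape=2, scale=1.5): mu_A = 3,    sigma_A^2 = 4.5
# B ~ Beta(7, 3):                mu_B = 7/10, sigma_B^2 = 21/1100
A = rng.gamma(shape=2.0, scale=1.5, size=size)
B = rng.beta(7, 3, size=size)

mu_A, var_A = 3.0, 4.5
mu_B, var_B = 0.7, 21 / 1100

closed_form = var_A * var_B + var_A * mu_B**2 + var_B * mu_A**2
print((A * B).var(), closed_form)  # should agree to roughly two decimals
```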
Note that my computation of the variance differs from yours. I have substantiated my results by simulating $m = 10^6$ observations of $X_i$ where $n_i \sim \operatorname{Poisson}(\lambda)$ and $p_i \sim \operatorname{Beta}(a,b)$, for $\lambda = 11$ and $(a,b) = (7,3)$. This should result in $\operatorname{Var}[X_i] = 1001/100$; your results do not match. I should also point out that the reason your computation does not work is that the total variance of each $X_i$ is not merely due to the expectation of the conditional variance of $X_i$ given $n_i$ and $p_i$; the other term in the law of total variance must also be included, which captures the variability of the conditional expectation of $X_i$. In other words, there is variation in $X_i$ coming from the binomial variance even when $n_i$ and $p_i$ are fixed, but there is also additional variation in the location of $X_i$ arising from the fact that $n_i$ and $p_i$ are not fixed.
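To spell out the arithmetic: a $\operatorname{Poisson}(11)$ draw has $\mu_n = \sigma_n^2 = 11$, and a $\operatorname{Beta}(7,3)$ draw has $\mu_p = 7/10$ and $\sigma_p^2 = 21/1100$, so the formula above gives
$$\operatorname{Var}[X_i] = \left(11\cdot\tfrac{21}{1100} + 11\cdot\tfrac{49}{100} + \tfrac{21}{1100}\cdot 121\right) + 11\left(\tfrac{7}{10} - \tfrac{21}{1100} - \tfrac{49}{100}\right) = \tfrac{791}{100} + \tfrac{210}{100} = \tfrac{1001}{100}.$$
A minimal version of the simulation just described might look like the following sketch (Python with NumPy; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
m = 10**6
lam, a, b = 11, 7, 3

n = rng.poisson(lam, size=m)   # n_i ~ Poisson(11)
p = rng.beta(a, b, size=m)     # p_i ~ Beta(7, 3)
x = rng.binomial(n, p)         # X_i | n_i, p_i ~ Binomial(n_i, p_i)

print(x.mean())  # theory: mu_n * mu_p = 11 * 7/10 = 7.7
print(x.var())   # theory: 1001/100 = 10.01
```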
First observe that $(N \mid M = m) \sim \operatorname{Bin}(m, p)$, where $M \sim \operatorname{Poisson}(\lambda)$.
To calculate $\mathbb{E}[N]=\lambda p$, you did
$$\mathbb{E}[N]=\mathbb{E}[\mathbb{E}(N \mid M)]=\mathbb{E}[Mp]=p\,\mathbb{E}[M]=\lambda p.$$
Using the fact that $\mathbb{V}(X)=\mathbb{E}[X^2]-\mathbb{E}^2[X]$, you can proceed in the same way:
$$\mathbb{V}[N]=\mathbb{E}[N^2]-\mathbb{E}^2[N]=\mathbb{E}[\mathbb{E}(N^2 \mid M)]-\lambda^2p^2=\mathbb{E}[Mp(1-p)+M^2p^2]-\lambda^2p^2=\lambda p(1-p)+\lambda p^2=\lambda p,$$
using $\mathbb{E}(N^2 \mid M)=Mp(1-p)+M^2p^2$ and $\mathbb{E}[M^2]=\lambda+\lambda^2$.
Your intuition is right!
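If you want to check this numerically, here is a quick simulation sketch (the values $\lambda = 11$, $p = 0.3$, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p, size = 11.0, 0.3, 10**6

M = rng.poisson(lam, size=size)
N = rng.binomial(M, p)        # N | M = m ~ Bin(m, p)

print(N.mean(), lam * p)      # both approximately 3.3
print(N.var(), lam * p)       # Var[N] = lambda * p as well
```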
> I'm also interested in the case where the distribution of M is itself binomial.
I hope the procedure I showed is clear; if so, you can use it in any such case, including when $M$ is binomial.
Edit: the law of $N$ is
$$\mathbb{P}[N=k]=\sum_{n=k}^{\infty}\binom{n}{k}p^k(1-p)^{n-k}\frac{e^{-\lambda}\lambda^n}{n!}=\frac{(\lambda p)^k}{k!}e^{-\lambda}\sum_{j=0}^{\infty}\frac{[(1-p)\lambda]^{j}}{j!}=\frac{(\lambda p)^k}{k!}e^{-\lambda}e^{(1-p)\lambda}=\frac{(\lambda p)^k e^{-\lambda p}}{k!},$$
for $k=0,1,2,\ldots$, substituting $j=n-k$ in the sum; that is, $N \sim \operatorname{Poisson}(\lambda p)$.
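A sketch of an empirical check of this law (again with arbitrary $\lambda$ and $p$, and using scipy for the reference pmf):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam, p, size = 11.0, 0.3, 10**6

M = rng.poisson(lam, size=size)
N = rng.binomial(M, p)

k = np.arange(12)
empirical = np.bincount(N, minlength=k.size)[: k.size] / size
theory = stats.poisson.pmf(k, lam * p)   # N ~ Poisson(lambda * p)
print(np.abs(empirical - theory).max())  # small, on the order of 1e-3
```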
Best Answer
If you work out the distribution of $p$ for $n=0,1,2,3$, so $2^n=1,2,4,8$, the resulting values suggest a mean of $\dfrac{1}{2^n}$ and a variance of $\dfrac{2^n-1}{2^{3n-1}}$; if we let $m=2^n$, these become a mean of $\dfrac{1}{m}$ and a variance of $\dfrac{2}{m^2}-\dfrac{2}{m^3}$.
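These small cases can be enumerated exactly; here is a sketch in Python, using scipy's binomial pmf and taking $p=\left(\frac2m q-1\right)^2$ as defined in the proof below:

```python
from scipy import stats

for n in range(4):
    m = 2**n
    qs = range(m + 1)
    w = stats.binom.pmf(list(qs), m, 0.5)          # P(q = qi) for q ~ Bin(m, 1/2)
    p_vals = [(2 * qi / m - 1) ** 2 for qi in qs]  # p = (2q/m - 1)^2
    mean = sum(wi * pi for wi, pi in zip(w, p_vals))
    var = sum(wi * (pi - mean) ** 2 for wi, pi in zip(w, p_vals))
    print(m, mean, 1 / m, var, 2 / m**2 - 2 / m**3)
```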
As for a proof, let $q$ be a binomial random variable with parameters $m$ and $\frac12$:
$q$ has mean $\frac{m}{2}$, variance $\frac{m}{4}$, and excess kurtosis $-\frac{2}{m}$,
so $\mathbb E\left[\frac2m q-1\right]=0$ and thus, with $p=\left(\frac2m q-1\right)^2$, you can say
$\mathbb E[p]=\mathbb E\left[\left(\frac2m q-1\right)^2\right]=\text{Var}\left(\frac2m q-1\right)=\text{Var}\left(\frac2m q\right)=\left(\frac2m\right)^2\frac{m}{4}=\frac{1}{m},$
$\mathbb E[p^2]=\mathbb E\left[\left(\frac2m q-1\right)^4\right]=\left(\frac2m\right)^4\mathbb E\left[\left(q-\frac m2\right)^4\right]=\left(\frac2m\right)^4\left(3-\frac{2}{m}\right)\left(\frac{m}{4}\right)^2=\frac{3}{m^2}-\frac2{m^3}$ (the fourth central moment of $q$ is $(\text{excess kurtosis}+3)\cdot\operatorname{Var}(q)^2$), and
$\text{Var}(p)= \left(\frac{3}{m^2}-\frac2{m^3}\right) -\left(\frac1m\right)^2 = \frac{2}{m^2}-\frac2{m^3}.\qquad \square$
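As a final numerical check of these moment computations, here is a sketch for $m=8$; scipy's `stats()` reports the variance and excess kurtosis used above:

```python
from scipy import stats

m = 8
mean_q, var_q, skew_q, exkurt_q = stats.binom.stats(m, 0.5, moments='mvsk')
print(var_q, m / 4)        # variance m/4
print(exkurt_q, -2 / m)    # excess kurtosis -2/m

# E[p] = (2/m)^2 Var(q);  E[p^2] = (2/m)^4 (3 + excess kurtosis) Var(q)^2
E_p = (2 / m) ** 2 * var_q
E_p2 = (2 / m) ** 4 * (3 + exkurt_q) * var_q**2
print(E_p, 1 / m)                            # mean of p
print(E_p2 - E_p**2, 2 / m**2 - 2 / m**3)    # variance of p
```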