First, recall that in $E[X|Y]$ we are taking the expectation with respect to $X$, and so it can be written as $E[X|Y]=E_X[X|Y]=g(Y)$. Because it's a function of $Y$, it's a random variable, and hence we can take its expectation (with respect to $Y$ now). So the double expectation should be read as $E_Y[E_X[X|Y]]$.
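To see why this double expectation recovers $E[X]$ (the tower property), here is a minimal derivation in the discrete case:
$$E_Y\big[E_X[X|Y]\big]=\sum_y E[X|Y=y]\,P(Y=y)=\sum_y\sum_x x\,P(X=x|Y=y)\,P(Y=y)=\sum_x x\,P(X=x)=E[X],$$
where the last step uses the law of total probability, $\sum_y P(X=x|Y=y)\,P(Y=y)=P(X=x)$.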
About the intuitive meaning, there are several approaches. I like to think of the expectation as a kind of predictor/guess (indeed, it's the predictor that minimizes the mean squared error).
Suppose for example that $X, Y$ are two (positively) correlated variables, say the weight and height of persons from a given population. The expectation of the weight $E(X)$ would be my best guess of the weight of an unknown person: I'd bet on this value, if not given more data (my uninformed bet is constant). Instead, if I know the height, I'd bet on $E(X | Y)$: that means that for different persons I'd bet a different value, and my informed bet would not be constant: sometimes I'd bet more than the "uninformed bet" $E(X)$ (for tall persons), sometimes less. The natural question arises: can I say something about my informed bet on average? Well, the tower property answers: on average, you'll bet the same.
Added: I agree (ten years later) with @Did's comment below. My notation here is misleading; an expectation is defined in itself, and it makes little or no sense to specify "with respect to $Y$". In my answer here I try to clarify this, and to reconcile this fact with the (many) examples where one qualifies (subscripts) the expectation (with respect to ...).
Maybe I'm not as pedagogical as Stefan, since I'll just post the answer straight away. If there's anything you need clarified, please let me know.
For the sake of clarity, I will denote $Z = Z_N = \sum\limits_{i=1}^N X_i$. Thus $Z_n$ is just the sum of the $X_i$'s, $i=1,\dots,n$, where $n$ is a nonnegative integer. Throughout, the $X_i$'s are i.i.d. and $N$ is a nonnegative integer-valued random variable independent of them.
$$E[Z_N]= \sum\limits_{n=0}^{\infty} E[Z_n |N=n] \cdot P(N=n)\\=\sum\limits_{n=0}^{\infty} E[Z_n] \cdot P(N=n)\\=\sum\limits_{n=0}^{\infty} nE[X_i] \cdot P(N=n)\\=E[X_i]\sum\limits_{n=0}^{\infty} n \cdot P(N=n)\\=E[X_i]\cdot E[N]$$
(The second equality is where the independence of $N$ and the $X_i$'s is used: conditioning on $N=n$ does not change the distribution of $Z_n$.)
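If you want a quick numerical sanity check of $E[Z_N]=E[X]\cdot E[N]$, here is a minimal simulation sketch (the choice of a Poisson $N$ and exponential $X_i$'s is mine, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
lam, mu = 3.0, 2.0            # E[N] = lam, E[X_i] = mu
trials = 100_000

# Draw N for every trial, then sum N i.i.d. exponential X_i's.
N = rng.poisson(lam, size=trials)
Z = np.array([rng.exponential(mu, size=n).sum() for n in N])

print(Z.mean())               # simulated E[Z_N], close to 6.0
print(lam * mu)               # Wald: E[N] * E[X] = 6.0
```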
I guess it's necessary to assume (or show) that the expected value of $N$ is actually finite.
This can also be quite easily shown using the probability generating function.
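For completeness, here is a sketch of that route (it needs the extra assumption that the $X_i$'s are nonnegative integer-valued): the PGF of $Z_N$ composes,
$$G_{Z_N}(s)=E\big[s^{Z_N}\big]=E\Big[E\big[s^{Z_N}\mid N\big]\Big]=E\big[G_X(s)^{N}\big]=G_N\big(G_X(s)\big),$$
and differentiating at $s=1$ (where $G_X(1)=1$) gives $E[Z_N]=G_N'(1)\cdot G_X'(1)=E[N]\cdot E[X]$.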
The variance can be computed using the law of total variance (sometimes called the variance decomposition formula), instead of the law of total expectation.
The law of total variance gives the following: for the random variables $X$ and $Y$,
$$Var(X) = E[Var(X|Y)] + Var(E[X|Y])$$
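In case this identity is unfamiliar, it follows in a few lines from the tower property:
$$\begin{aligned}Var(X) &= E[X^2]-E[X]^2 = E\big[E[X^2|Y]\big]-E\big[E[X|Y]\big]^2\\ &= E\big[Var(X|Y)+E[X|Y]^2\big]-E\big[E[X|Y]\big]^2 = E[Var(X|Y)]+Var\big(E[X|Y]\big).\end{aligned}$$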
For $X=Z_N$ and $Y=N$ we obtain the following,
$$Var(Z_N) = E[Var(Z_N|N)]+Var(E[Z_N|N])$$
After some computations (I'll leave those to you, but see the sketch below), similar to the ones used for the expectation, it can be shown that $E[Var(Z_N|N)]=E[N]\cdot Var(X)$.
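In case it helps, here is one possible route, using the independence assumptions from above: conditionally on $N=n$, $Z_N=Z_n$ is a sum of $n$ i.i.d. terms, so
$$Var(Z_N\mid N=n)=n\cdot Var(X),\qquad\text{hence}\qquad E[Var(Z_N\mid N)]=E[N\cdot Var(X)]=E[N]\cdot Var(X).$$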
Knowing that $E[Z_N|N=n]=nE[X]$, it follows that
$$Var(E[Z_N|N])=Var(N \cdot E[X]) = E[X]^2 \cdot Var(N)$$
Finally we get that $$Var(Z_N) = E[N]\cdot Var(X)+E[X]^2 \cdot Var(N)$$
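As a quick sanity check of this formula, consider the compound Poisson case $N\sim\mathrm{Poisson}(\lambda)$, where $E[N]=Var(N)=\lambda$:
$$Var(Z_N)=\lambda\cdot Var(X)+E[X]^2\cdot\lambda=\lambda\big(Var(X)+E[X]^2\big)=\lambda\, E[X^2],$$
which is the familiar compound Poisson variance.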
Best Answer
If $Z$ is $\mathcal G$-measurable (and $X$, $ZX$ are integrable), then $$\mathbb E[ZX\mid \mathcal G]=Z\,\mathbb E[X\mid \mathcal G] \ \ a.s.$$
Proof
Let $G\in \mathcal G$. Then, $$\mathbb E\big[\mathbb E[ZX\mid \mathcal G]\boldsymbol 1_G\big]=\mathbb E[XZ\boldsymbol 1_G]=\mathbb E\big[\mathbb E[X\mid \mathcal G]Z\boldsymbol 1_G\big].\tag{*}$$ The first equality is the defining property of $\mathbb E[ZX\mid \mathcal G]$. The second comes from the fact that $Z\boldsymbol 1_G$ is $\mathcal G$-measurable: the defining property of $\mathbb E[X\mid \mathcal G]$ holds against indicators of sets in $\mathcal G$, and extends to bounded $\mathcal G$-measurable factors by the usual approximation argument. Since $Z\,\mathbb E[X\mid \mathcal G]$ is $\mathcal G$-measurable, the claim follows because $(*)$ holds for all $G\in \mathcal G$.
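To make the base case of that approximation concrete: for $Z=\boldsymbol 1_H$ with $H\in\mathcal G$, the second equality of $(*)$ reads
$$\mathbb E[X\boldsymbol 1_H\boldsymbol 1_G]=\mathbb E[X\boldsymbol 1_{H\cap G}]=\mathbb E\big[\mathbb E[X\mid \mathcal G]\boldsymbol 1_{H\cap G}\big]=\mathbb E\big[\mathbb E[X\mid \mathcal G]\boldsymbol 1_H\boldsymbol 1_G\big],$$
using the defining property with the set $H\cap G\in\mathcal G$.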