Intuition for why mean of lognormal distribution depends on variance of normally distributed rv

Tags: intuition, lognormal distribution, mean, normal distribution, variance

Let $X\sim\mathcal{N}(\mu,\sigma^2)$ be a normally distributed random variable. Then $\exp(X)\sim\text{Lognormal}(\mu,\sigma^2)$, and its mean is

$$
\mathbb{E}[\exp(X)]=\exp\left(\mu+\dfrac{\sigma^2}{2}\right)
$$

where the two parameters $\mu$ and $\sigma^2$ are carried over from the normal distribution.
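As a quick sanity check (a minimal simulation sketch, not part of the original question; it assumes NumPy, and the values of $\mu$ and $\sigma$ are arbitrary illustrative choices), the sample mean of $\exp(X)$ matches the closed-form expression:

```python
import numpy as np

# Monte Carlo check of E[exp(X)] = exp(mu + sigma^2 / 2) for X ~ N(mu, sigma^2).
# mu and sigma are arbitrary illustrative values, not from the original post.
rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.2

x = rng.normal(mu, sigma, size=1_000_000)   # draws from N(mu, sigma^2)
empirical = np.exp(x).mean()                # sample mean of exp(X)
theoretical = np.exp(mu + sigma**2 / 2)     # closed-form lognormal mean

print(f"empirical:   {empirical:.4f}")
print(f"theoretical: {theoretical:.4f}")    # the two should agree closely
```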

I am looking for an intuitive explanation of why the mean of the log-normal distribution depends positively on $\sigma^2$.

Here is my attempt: in $\exp(X)$, lower values of $X$ are "squashed" together, while higher values of $X$ are "dispersed".

But there must be more to it than this (for example, why is $\sigma^2$ halved?).

Best Answer

The intuition for this result comes from the fact that the exponential function is strictly convex. When you apply a convex transformation to the random variable $X$, the positive deviations from the mean are enlarged and the negative deviations from the mean are reduced, so the mean of the transformed random variable shifts upward. This result is closely related to Jensen's inequality, which states that for any convex function $\varphi$ and random variable $X$:

$$\mathbb{E}(\varphi(X)) - \varphi(\mathbb{E}(X)) \geqslant 0.$$
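To see this numerically (an illustrative sketch added here, not part of the original answer), take $\varphi = \exp$ and $X \sim \mathcal{N}(0, \sigma^2)$; the Jensen gap $\mathbb{E}[\exp(X)] - \exp(\mathbb{E}[X])$ widens as $\sigma$ grows:

```python
import numpy as np

# The Jensen gap E[exp(X)] - exp(E[X]) for X ~ N(0, sigma^2) widens as
# sigma grows; in this case it equals exp(sigma^2 / 2) - 1 in closed form.
rng = np.random.default_rng(0)

for sigma in (0.1, 0.5, 1.0, 2.0):
    x = rng.normal(0.0, sigma, size=1_000_000)
    gap = np.exp(x).mean() - np.exp(x.mean())
    print(f"sigma = {sigma:.1f}: gap ~ {gap:.4f}  (exact: {np.exp(sigma**2 / 2) - 1:.4f})")
```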

In the present case you have a strictly convex transformation and an underlying symmetric random variable, which is sufficient to give strict inequality in the statement above. The basic intuition is the same as for the broader application of Jensen's inequality. As to the specific form in which $\sigma^2$ enters the formula for the mean, that can only really be understood by working through the derivation of the expected value of a log-normal random variable.
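For reference, here is a sketch of that derivation (a standard completing-the-square argument, added here for completeness rather than taken from the original answer). Writing out the expectation against the normal density,

$$
\mathbb{E}[\exp(X)]=\int_{-\infty}^{\infty} e^{x}\,\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx.
$$

Completing the square in the exponent gives

$$
x-\frac{(x-\mu)^2}{2\sigma^2}=-\frac{\left(x-(\mu+\sigma^2)\right)^2}{2\sigma^2}+\mu+\frac{\sigma^2}{2},
$$

so the integrand is $e^{\mu+\sigma^2/2}$ times the density of a $\mathcal{N}(\mu+\sigma^2,\sigma^2)$ random variable, which integrates to one. Hence $\mathbb{E}[\exp(X)]=\exp(\mu+\sigma^2/2)$: the $\sigma^2/2$ is exactly the constant term left over after completing the square.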