I came across a paper which states: "The product of independent log-normal
quantities also follows a log-normal distribution" (p. 345). It also gives a very rich account of the log-normal distribution. You can download the article here:
http://stat.ethz.ch/~stahel/lognormal/bioscience.pdf
As for the second question, if I come across any solution, I will let you know.
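The quoted claim follows immediately from the fact that the log of a product is a sum of independent normals. A minimal numerical sketch (the parameter values below are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: log(X1 * X2) = log X1 + log X2, a sum of independent normals,
# so the product is again log-normal. Parameters are illustrative.
rng = np.random.default_rng(0)
n = 1_000_000
mu1, sigma1 = 0.5, 0.8
mu2, sigma2 = -0.2, 0.6

x1 = rng.lognormal(mu1, sigma1, n)
x2 = rng.lognormal(mu2, sigma2, n)
log_prod = np.log(x1 * x2)

# log(X1*X2) should be normal with mean mu1+mu2 and sd sqrt(sigma1^2+sigma2^2)
print(log_prod.mean())   # close to 0.3
print(log_prod.std())    # close to sqrt(0.8^2 + 0.6^2) = 1.0
```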
Let $X_i \sim_{\text{i.i.d.}} \text{LNorm}(\mu,\,\sigma)$, meaning
that the r.v. $\log X_i$ is normal with mean
$\mu$ and standard deviation $\sigma$. Considering $M_n := \max_{1
\leqslant i \leqslant n} X_i$, we know that there exist two sequences
$a_n >0$ and $b_n$ such that
$$
\tag{1}
\frac{M_n - b_n}{a_n} \to \text{Gum}(0, 1)
$$
where $\text{Gum}(\nu,\,\beta)$ denotes the Gumbel distribution with
location $\nu$ and scale $\beta$. This means that
$F_{M_n}(a_n x + b_n) \to F_{\text{Gum}}(x;\,0,\,1)$ for all $x$.
Quite obviously the two sequences $a_n$ and $b_n$ depend on $\mu$ and
$\sigma$, so they could be denoted as $a_n(\mu,\,\sigma)$ and
$b_n(\mu,\,\sigma)$. For instance if $\mu$ is replaced by $\mu +1$
then the distribution of $X_i$ is replaced by that of $e X_i$ and the
distribution of $M_n$ is replaced by that of $e M_n$, implying that
$a_n$ and $b_n$ have to be replaced by $ea_n$ and $eb_n$ to maintain
the same limit. Similarly if we replace $\mu$ by $0$ with $\sigma$
unchanged, $X_i$ is to be replaced by $e^{-\mu} X_i$ and then
$a_n$ and $b_n$ must be replaced by $e^{-\mu} a_n$ and $e^{-\mu}b_n$.
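The two scale-change arguments above can be combined into a single exact identity: writing $X_i = \exp(\mu + \sigma Z_i)$ with standard-normal $Z_i$, the maximum satisfies $M_n(\mu,\sigma) = e^\mu\, M_n(0,1)^\sigma$ sample by sample, not just in distribution. A quick sketch on shared draws (sample size and parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.standard_normal(n)      # one shared set of standard-normal draws
mu, sigma = 0.7, 1.5            # illustrative parameters

m_std = np.exp(z).max()                  # M_n for LNorm(0, 1)
m_gen = np.exp(mu + sigma * z).max()     # M_n for LNorm(mu, sigma)

# Exact identity, since exp is increasing and sigma > 0:
print(np.isclose(m_gen, np.exp(mu) * m_std ** sigma))  # True
```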
The question can be formulated as: if we use the sequences $a_n(0,\,1)$
and $b_n(0,\,1)$ on the left-hand side of (1), instead of the correct
$a_n(\mu,\,\sigma)$ and $b_n(\mu,\,\sigma)$, do we get
$\text{Gum}(\mu,\,\sigma)$ on the right-hand side? The answer is
no, because the parameters of the Gumbel are genuine location and scale
parameters, while this is not true for the log-normal. The parameter
$\sigma$ of the log-normal impacts the tail, as can be seen by the
fact that the coefficient of variation increases with $\sigma$. While
$\text{LNorm}(\mu,\,\sigma)$ always remains in the Gumbel domain of
attraction, the sequences $a_n$ and $b_n$ must tend to $\infty$ more
rapidly as $\sigma$ increases. It can be proved that we can in (1)
use sequences $a_n$ and $b_n$ such that $$ b_n(\mu, \sigma) = e^\mu \,
b_n(0, 1)^\sigma, \qquad a_n(\mu, \sigma) = \sigma \,(2 \log n)^{-1/2}
b_n(\mu,\,\sigma), $$ see Embrechts, P., Klüppelberg, C. and Mikosch,
T., *Modelling Extremal Events*,
Table 3.4.4, pp. 155-157. If we use sequences $a_n$ and $b_n$ with a wrong $\sigma$, we
will not get a non-degenerate limit for the left-hand side of (1),
because the growth rates of $a_n$ and $b_n$ are then unsuitable for
the tail of $X_i$.
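The slow convergence in (1) can be made visible without simulation, since $F_{M_n}(a_n x + b_n) = \Phi(\log(a_n x + b_n))^n$ for $\text{LNorm}(0,1)$. The sketch below uses a concrete choice of $b_n(0,1)$, namely the exponential of the classical normal norming constant; this specific choice is an assumption for illustration, consistent with the displayed relation $a_n = (2\log n)^{-1/2}\, b_n$ for $\sigma = 1$:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norming_constants(n):
    # Classical normal norming constant, exponentiated for LNorm(0, 1).
    # This concrete choice is an assumption consistent with the relation
    # a_n = (2 log n)^{-1/2} b_n given in the text (sigma = 1).
    L = math.log(n)
    bn_norm = (math.sqrt(2 * L)
               - (math.log(L) + math.log(4 * math.pi)) / (2 * math.sqrt(2 * L)))
    bn = math.exp(bn_norm)
    an = bn / math.sqrt(2 * L)
    return an, bn

n = 10**6
an, bn = norming_constants(n)
for x in (0.0, 1.0):
    exact = normal_cdf(math.log(an * x + bn)) ** n   # P(M_n <= a_n x + b_n)
    gumbel = math.exp(-math.exp(-x))                 # limit CDF
    print(x, exact, gumbel)   # agreement is slow (rate ~ 1/log n) but visible
```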
Best Answer
When a variable $X$ has a Normal distribution with mean $\mu$ and standard deviation $\sigma \gt 0,$ we say that $Z=e^X$ has a Lognormal$(\mu,\sigma)$ distribution.
The laws of logarithms show that $\mu$ (an additive location parameter for the Normal family of distributions) determines the scale of $Z.$ Because the skewness of a variable does not depend on its scale, we may take $\mu$ to be any convenient value. Choosing $\mu=0,$ use the Normal density (which is proportional to the exponential of $-x^2/(2\sigma^2)$) to compute the (raw) $k^\text{th}$ moment of $Z$ via the substitution $y = x - k\sigma^2:$
$$\begin{aligned} \mu_k(\sigma) &=E\left[Z^k\right] = E\left[\exp(X)^k\right] = E\left[\exp(kX)\right]\\ &= \frac{1}{\sigma\sqrt{2\pi}}\int_{\mathbb{R}} \exp\left(-\frac{1}{2\sigma^2}x^2 + kx\right)\,\mathrm{d}x\\ &= \frac{1}{\sigma\sqrt{2\pi}}\exp\left(k^2\sigma^2/2\right)\int_{\mathbb{R}} \exp\left(-\frac{1}{2\sigma^2}x^2 + kx - k^2\sigma^2/2\right)\,\mathrm{d}x\\ &= \frac{1}{\sigma\sqrt{2\pi}}\exp\left(k^2\sigma^2/2\right)\int_{\mathbb{R}} \exp\left(-\frac{1}{2\sigma^2}\left[x - k\sigma^2\right]^2\right)\,\mathrm{d}x\\ &= \exp\left(k^2\sigma^2/2\right)\left[\frac{1}{\sigma\sqrt{2\pi}}\int_{\mathbb{R}} \exp\left(-\frac{1}{2\sigma^2}y^2\right)\,\mathrm{d}y\right]\\ &= \exp\left(k^2\sigma^2/2\right). \end{aligned}\tag{*}$$
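The closed form in $(*)$ can be checked numerically by integrating $e^{kx}$ against the Normal density; a sketch with a plain midpoint rule (the grid limits are ad hoc but generous for the parameters chosen):

```python
import numpy as np

def raw_moment_numeric(k, sigma, lo=-10.0, hi=10.0, steps=400_000):
    # Midpoint-rule integration of exp(k x) times the N(0, sigma^2) density,
    # i.e. E[exp(kX)] = E[Z^k].
    dx = (hi - lo) / steps
    x = lo + (np.arange(steps) + 0.5) * dx
    dens = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return float(np.sum(np.exp(k * x) * dens) * dx)

sigma = 0.5
for k in (1, 2, 3):
    print(k, raw_moment_numeric(k, sigma), np.exp(k**2 * sigma**2 / 2))
```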
For $k=1$ this shows the mean is $\exp(\sigma^2/2)$, and from this we may compute the central moments using the Binomial Theorem as
$$\begin{aligned} \mu^\prime_k(\sigma) &= E\left[(Z - E[Z])^k\right] = E\left[\sum_{i=0}^k \binom{k}{i} Z^i \left(-E[Z]\right)^{k-i}\right] \\ &= \sum_{i=0}^k \binom{k}{i}(-1)^{k-i} \mu_i(\sigma) \mu_1(\sigma)^{k-i}. \end{aligned}\tag{**}$$
Applying this to $k=2,3$ gives
$$\mu^\prime_2(\sigma) = \mu_0(\sigma)\mu_1(\sigma)^2 - 2\mu_1(\sigma)\mu_1(\sigma) + \mu_2(\sigma) = e^{\sigma^2}\left(e^{\sigma^2}-1\right)$$
and
$$\begin{aligned}\mu^\prime_3(\sigma) &= -\mu_0(\sigma)\mu_1(\sigma)^3 + 3\mu_1(\sigma)\mu_1(\sigma)^2 - 3\mu_2(\sigma)\mu_1(\sigma) + \mu_3(\sigma) \\ &= e^{3\sigma^2/2}\left(2 - 3 e^{\sigma^2} + e^{3\sigma^2}\right) \\ &= e^{3\sigma^2/2}\left(e^{\sigma^2}+2\right)\left(e^{\sigma^2}-1\right)^2. \end{aligned}$$
By definition, the skewness is
$$\frac{\mu^\prime_3(\sigma)}{\mu^\prime_2(\sigma)^{3/2}} = \frac{e^{3\sigma^2/2}\left(e^{\sigma^2}+2\right)\left(e^{\sigma^2}-1\right)^2}{\left[e^{\sigma^2}\left(e^{\sigma^2}-1\right)\right]^{3/2}} = \left(e^{\sigma^2}+2\right)\sqrt{e^{\sigma^2}-1}.$$
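Since everything reduces to polynomial algebra in $e^{\sigma^2/2}$, the central moments from $(**)$ and the skewness $\mu^\prime_3/\mu^{\prime\,3/2}_2 = \left(e^{\sigma^2}+2\right)\sqrt{e^{\sigma^2}-1}$ can be verified in a few lines of arithmetic ($\sigma$ below is an arbitrary test value):

```python
import math

sigma = 0.8

def raw(k):                       # raw moment from (*): E[Z^k]
    return math.exp(k * k * sigma * sigma / 2)

def central(k):                   # central moment via the binomial sum (**)
    return sum(math.comb(k, i) * (-1) ** (k - i) * raw(i) * raw(1) ** (k - i)
               for i in range(k + 1))

s2 = math.exp(sigma * sigma)
assert math.isclose(central(2), s2 * (s2 - 1))
assert math.isclose(central(3), s2**1.5 * (s2 + 2) * (s2 - 1) ** 2)

skew = central(3) / central(2) ** 1.5
print(skew, (s2 + 2) * math.sqrt(s2 - 1))   # both equal the log-normal skewness
```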
Comments and Generalizations
Higher standardized central moments (e.g. the kurtosis) are readily computed in the same way: $(*)$ and $(**)$ reduce the problem to polynomial algebra (the variable is $\exp(\sigma^2/2)$).
Because $\mu$ is a scale parameter for the Lognormal family (corresponding to a scale factor of $e^\mu$), it can be introduced into the formulas $(*)$ directly, where its $k^\text{th}$ power $\left(e^\mu\right)^k = e^{k\mu}$ will multiply the result, giving the general formulas
$$\mu_k(\mu,\sigma) = E\left[Z^k\right] = \exp\left(k\mu + k^2\sigma^2/2\right)$$
and then, of course,
$$\mu^\prime_k(\mu,\sigma) = E\left[(Z - E[Z])^k\right] = e^{k\mu} \mu^\prime_k(\sigma).$$
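The $e^{k\mu}$ scaling of the central moments holds exactly at the sample level too: multiplying every observation by $e^\mu$ scales the sample mean, hence every deviation, by $e^\mu$, so the $k^\text{th}$ sample central moment scales by $e^{k\mu}$. A quick sketch on shared draws (seed and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
z0 = np.exp(0.6 * rng.standard_normal(50_000))   # LNorm(0, 0.6) sample
mu = 0.9
z1 = np.exp(mu) * z0                             # same draws, now LNorm(mu, 0.6)

for k in (2, 3, 4):
    c0 = np.mean((z0 - z0.mean()) ** k)          # sample central moment, mu = 0
    c1 = np.mean((z1 - z1.mean()) ** k)          # sample central moment, mu = 0.9
    print(k, np.isclose(c1, np.exp(k * mu) * c0))  # True: exact scaling
```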