Let $X_i \sim_{\text{i.i.d.}} \text{LNorm}(\mu,\,\sigma)$, meaning that the
r.v. $\log X_i$ is normal with mean $\mu$ and standard deviation $\sigma$.
Considering $M_n := \max_{1 \leqslant i \leqslant n} X_i$, we know that there
exist two sequences $a_n > 0$ and $b_n$ such that
$$
\tag{1}
\frac{M_n - b_n}{a_n} \xrightarrow{d} \text{Gum}(0, 1)
$$
where $\text{Gum}(\nu,\,\beta)$ denotes the Gumbel distribution with
location $\nu$ and scale $\beta$, i.e. with c.d.f.
$F_{\text{Gum}}(x;\,\nu,\,\beta) = \exp\{-e^{-(x-\nu)/\beta}\}$. This means
that $F_{M_n}(a_n x + b_n) \to F_{\text{Gum}}(x;\,0,\,1)$ for all $x$.
Quite obviously the two sequences $a_n$ and $b_n$ depend on $\mu$ and
$\sigma$, so they could be denoted as $a_n(\mu,\,\sigma)$ and
$b_n(\mu,\,\sigma)$. For instance if $\mu$ is replaced by $\mu +1$
then the distribution of $X_i$ is replaced by that of $e X_i$ and the
distribution of $M_n$ is replaced by that of $e M_n$, implying that
$a_n$ and $b_n$ have to be replaced by $ea_n$ and $eb_n$ to maintain
the same limit. Similarly if we replace $\mu$ by $0$ with $\sigma$
unchanged, $X_i$ is to be replaced by $e^{-\mu} X_i$ and then
$a_n$ and $b_n$ must be replaced by $e^{-\mu} a_n$ and $e^{-\mu}b_n$.
The question can be formulated as follows: if we use the sequences $a_n(0,\,1)$
and $b_n(0,\,1)$ on the left-hand side of (1), instead of the correct
$a_n(\mu,\,\sigma)$ and $b_n(\mu,\,\sigma)$, do we get
$\text{Gum}(\mu,\,\sigma)$ on the right-hand side? The answer is
no, because the parameters of the Gumbel distribution are genuine location and
scale parameters, while those of the log-normal are not. The parameter
$\sigma$ of the log-normal affects the tail, as can be seen from the
fact that the coefficient of variation increases with $\sigma$. Although
$\text{LNorm}(\mu,\,\sigma)$ always remains in the Gumbel domain of
attraction, the sequences $a_n$ and $b_n$ must tend to $\infty$ more
rapidly as $\sigma$ increases. It can be proved that in (1) we can
use sequences $a_n$ and $b_n$ such that $$ b_n(\mu,\,\sigma) = e^\mu \,
b_n(0,\,1)^\sigma, \qquad a_n(\mu,\,\sigma) = \sigma \,(2 \log n)^{-1/2}\,
b_n(\mu,\,\sigma), $$ see Table 3.4.4, pp. 155–157, in Embrechts P.,
Klüppelberg C. and Mikosch T., Modelling Extremal Events for Insurance and
Finance. If we use sequences $a_n$ and $b_n$ with a wrong $\sigma$, we
will not get a non-degenerate limit on the left-hand side of (1),
because the growth rates of $a_n$ and $b_n$ are then unsuitable for
the tail of $X_i$.
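As a quick numerical illustration of the relations above: one convenient (non-unique) choice of centering sequence is the $(1-1/n)$ quantile of the lognormal, and with that choice the relation $b_n(\mu,\,\sigma) = e^\mu\, b_n(0,\,1)^\sigma$ holds exactly. The sketch below (Python standard library only; the helper names `a_n`, `b_n` are ours, and the specific parameter values are arbitrary) checks this:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def b_n(n, mu=0.0, sigma=1.0):
    """Centering sequence: the (1 - 1/n) quantile of LNorm(mu, sigma)."""
    z = NormalDist().inv_cdf(1.0 - 1.0 / n)  # standard normal quantile
    return exp(mu + sigma * z)

def a_n(n, mu=0.0, sigma=1.0):
    """Scaling sequence a_n(mu, sigma) = sigma * (2 log n)^(-1/2) * b_n(mu, sigma)."""
    return sigma * b_n(n, mu, sigma) / sqrt(2.0 * log(n))

# The relation b_n(mu, sigma) = e^mu * b_n(0, 1)^sigma holds exactly for this
# choice, since the lognormal quantile is exp(mu + sigma * z_n).
n, mu, sigma = 10_000, 1.3, 0.7
assert abs(b_n(n, mu, sigma) - exp(mu) * b_n(n) ** sigma) < 1e-8

# A wrong sigma changes the growth rate: b_n(0, 2) = b_n(0, 1)^2.
print(b_n(n), b_n(n, 0.0, 2.0))
```

Because $b_n(0,\,\sigma) = b_n(0,\,1)^\sigma$, doubling $\sigma$ squares the centering sequence; this growth-rate mismatch is exactly why normalizing with the wrong $\sigma$ destroys the non-degenerate limit.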
Write $X_i = e^{Y_i}$, where $(Y_1, Y_2)$ is bivariate normal with means $\mu_i$, standard deviations $\sigma_i$ and correlation $\rho$. Then
$\text{Cov}(X_1,X_2)=E(X_1X_2)-E(X_1)E(X_2)$
and
$E(X_1X_2)=E(e^{Y_1+Y_2})$.
Now the distribution of $Y_1+Y_2$ is normal, so $E(e^{Y_1+Y_2})$ is just the expectation of a univariate lognormal.
The $E(X_1)E(X_2)$ term you can already do.
As a result, it's straightforward to write $\text{Cov}(X_1,X_2)$ in terms of $\mu,\sigma$ and $\rho$ and thereby to solve for $\rho$.
$Y_1+Y_2\sim N(\mu_1+\mu_2,\,\sigma_1^2+\sigma_2^2+2\rho\sigma_1\sigma_2)$, so
$e^{Y_1+Y_2}\sim \operatorname{logN}(\mu_1+\mu_2,\,\sigma_1^2+\sigma_2^2+2\rho\sigma_1\sigma_2)$ (the second parameter being the variance),
which has expectation $\exp[\mu_1+\mu_2+\frac{1}{2}(\sigma_1^2+\sigma_2^2+2\rho\sigma_1\sigma_2)]$.
$E(X_i)=\exp(\mu_i+\frac{1}{2}\sigma_i^2)$
So $\text{Cov}(X_1,X_2)=E(X_1)E(X_2)[\exp(\rho\sigma_1\sigma_2)-1]$
And hence:
$\exp(\rho\sigma_1\sigma_2)-1=\frac{\text{Cov}(X_1,X_2)}{E(X_1)E(X_2)}$
$\rho=\dfrac{1}{\sigma_1\sigma_2}\log\!\left(\dfrac{\text{Cov}(X_1,X_2)}{E(X_1)E(X_2)}+1\right)$
You can extend this approach to calculating $\rho_{ij}$ from $\text{Cov}(X_i,X_j)$ and the other quantities.
However, if you're trying to do this to estimate parameters from a sample, using sample moments of a lognormal to do parameter estimation (i.e. method-of-moments) doesn't always perform all that well. (You might consider MLE if you can.)
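A simulation can sanity-check the inversion. This sketch (Python standard library only; the function name `recover_rho` and the parameter values are our illustrative choices) draws correlated normal pairs, exponentiates them, and recovers $\rho$ from the sample covariance and means of the lognormal pairs:

```python
import random
from math import exp, log, sqrt

def recover_rho(mu1, mu2, s1, s2, rho, n=100_000, seed=42):
    """Simulate lognormal pairs X_i = exp(Y_i), then invert
    rho = log(Cov(X1,X2) / (E[X1] E[X2]) + 1) / (s1 * s2)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        y1 = mu1 + s1 * z1                                  # Y1 ~ N(mu1, s1^2)
        y2 = mu2 + s2 * (rho * z1 + sqrt(1 - rho**2) * z2)  # corr(Y1, Y2) = rho
        xs.append(exp(y1))
        ys.append(exp(y2))
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum(x * y for x, y in zip(xs, ys)) / n - mx * my
    return log(cov / (mx * my) + 1.0) / (s1 * s2)

print(recover_rho(0.2, -0.1, 0.5, 0.8, 0.6))  # should land near 0.6
```

Note that this plugs sample moments into the population formula, so it inherits the method-of-moments caveat above: for large $\sigma_i$ the lognormal sample covariance is very noisy and the recovered $\rho$ converges slowly.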
Best Answer
Some basic facts about the Normal distribution help with this.
Background on the Normal distribution
The first fact is that any Normal random variable $Y$ with mean $\mu$ and standard deviation $\sigma$ has the same distribution as $\sigma Z + \mu$ where $Z$ is a standard Normal variable (that is, it has zero mean and unit s.d.).
The second fact is that when $(Y_1, Y_2)$ have a bivariate Normal distribution, then any linear combination $U = \alpha Y_1 + \beta Y_2$ has a Normal distribution. We may determine exactly which distribution that is by computing the mean and variance of $U$ using the usual rules,
$$E[U] = \alpha E[Y_1] + \beta E[Y_2]$$
and
$$\operatorname{Var}(U) = \alpha^2\operatorname{Var}(Y_1) + \beta^2\operatorname{Var}(Y_2) + 2\alpha\beta\operatorname{Cov}(Y_1,Y_2).$$
The third fact is that the density function of a standard Normal variable $Z$ at the value $z$ is $C\exp(-z^2/2)$ for a universal constant $C$ (whose value we don't need to know).
Because this is a density function, it integrates to unity. By a simple change of variable $z \to \alpha z + \beta$ ($\alpha \ne 0$) we can compute a host of related integrals:
$$1 = \int_{\mathbb R}C e^{-z^2/2}\,\mathrm{d}z = C\int_{\mathbb R} e^{-(\alpha z + \beta)^2/2}\,\mathrm{d}(\alpha z + \beta) = |\alpha|\, e^{-\beta^2/2}\, C \int_{\mathbb R} e^{-\alpha^2 z^2/2 - \alpha\beta z}\,\mathrm{d}z$$
which is equivalent to our fourth (and final) fact,
$$\frac{e^{\beta^2/2}}{|\alpha| } = C\int_{\mathbb R} e^{-\alpha^2 z^2/2 - \alpha\beta z}\,\mathrm{d}z.$$
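For a concrete check, the identity $e^{\beta^2/2}/|\alpha| = C\int_{\mathbb R} e^{-\alpha^2 z^2/2 - \alpha\beta z}\,\mathrm{d}z$ with $C = 1/\sqrt{2\pi}$ can be verified by straightforward numerical quadrature (a rough sketch: the trapezoidal rule over a wide truncated interval, with arbitrary test values of $\alpha$ and $\beta$):

```python
from math import exp, pi, sqrt

def gaussian_integral(alpha, beta, lo=-40.0, hi=40.0, m=200_000):
    """Trapezoidal estimate of C * integral exp(-alpha^2 z^2/2 - alpha*beta*z) dz."""
    C = 1.0 / sqrt(2.0 * pi)
    f = lambda z: exp(-alpha**2 * z**2 / 2.0 - alpha * beta * z)
    h = (hi - lo) / m
    total = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, m))
    return C * h * total

alpha, beta = 1.5, 0.8
# Compare the quadrature against the closed form exp(beta^2/2) / |alpha|.
assert abs(gaussian_integral(alpha, beta) - exp(beta**2 / 2.0) / abs(alpha)) < 1e-6
```

The integrand decays like a Gaussian, so truncating at $\pm 40$ and using a fine trapezoidal grid is more than accurate enough here.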
Lognormal distributions
Suppose, then, that $(Y_1,Y_2)$ has a bivariate Normal distribution with means $\mu_i,$ standard deviations $\sigma_i,$ and covariance $\sigma_{12}=\rho\sigma_1\sigma_2$ (thus, $\rho$ is the correlation coefficient). By definition, $(X_1,X_2) = (e^{Y_1}, e^{Y_2})$ has a bivariate Lognormal distribution. Let's compute some of its moments.
The raw moments of any order $k$ are evaluated from the fourth fact as
$$E\left[X_i^k\right] = E\left[ \left(e^{Y_i}\right)^k\right] = E\left[e^{k Y_i}\right] = E\left[e^{k(\sigma_i Z_i + \mu_i)}\right] = E\left[e^{(k\sigma_i)Z_i + k\mu_i}\right] = e^{k\mu_i + (k\sigma_i)^2/2}$$
and the mixed raw moments of orders $(j,k)$ as
$$E\left[X_1^j X_2^k\right] = E\left[ \left(e^{Y_1}\right)^j \left(e^{Y_2}\right)^k\right] = E\left[e^{j Y_1 + k Y_2}\right] = e^{j\mu_1 + k\mu_2} e^{(j^2\sigma_1^2 + k^2\sigma_2^2 + 2jk\rho\sigma_1\sigma_2)/2}.$$
The last equality follows from the variance formula in the second fact, as applied to the linear combination $jY_1 + kY_2.$
Consequently, the variances and covariances are
$$S_i^2=\operatorname{Var}(X_i) = E[X_i^2] - E[X_i]^2 = e^{2\mu_i + (2\sigma_i)^2/2} - \left(e^{\mu_i + \sigma_i^2/2}\right)^2 = e^{2\mu_i + \sigma_i^2}\left(e^{\sigma_i^2}-1\right)$$
and, with similar calculations,
$$S_{12}=\operatorname{Cov}(X_1, X_2) = E[X_1X_2] - E[X_1]E[X_2] = \cdots = e^{\mu_1+\mu_2 + \sigma_1^2/2 + \sigma_2^2/2}\left(e^{\rho\sigma_1\sigma_2} - 1\right).$$
By definition, the correlation is
$$R_{12}=\operatorname{Cor}(X_1,X_2) = \frac{S_{12}}{S_1S_2} = \frac{e^{\rho\sigma_1\sigma_2} - 1}{\sqrt{(e^{\sigma_1^2} -1 )(e^{\sigma_2^2}-1)}}.$$
Answering the question
The question is tantamount to asking how to recover the covariance parameter, $\sigma_{12} = \operatorname{Cov}(Y_1,Y_2)$ in terms of the correlation and other moments of the lognormally distributed variables $(X_1,X_2).$ Writing $$M_i = E[X_i] = e^{\mu_i + \sigma_i^2/2}$$ for the expectations, easy algebra gives
$$e^{\sigma_i^2} = 1 + \frac{S_i^2}{M_i^2},$$
whence
$$\sigma_i = \sqrt{\log \left(1 + \frac{S_i^2}{M_i^2}\right)};$$
and
$$e^{\rho \sigma_1 \sigma_2} = 1 + R_{12} \frac{S_1S_2}{M_1M_2},$$
entailing
$$\sigma_{12} = \rho\sigma_1\sigma_2 = \log\left(1 + R_{12}\,\frac{S_1S_2}{M_1M_2}\right).$$
The formula proposed in the question appears to be in some kind of mixed form where the Normal parameters appear on both sides. The closest I can come retains $\rho$ in the foregoing equation and re-expresses the $\sigma_i$ in terms of the moments of the $X_i$ to write
$$\sigma_{12} = \rho \sqrt{\log \left(1 + \frac{S_1^2}{M_1^2}\right)\,\log \left(1 + \frac{S_2^2}{M_2^2}\right)}.$$
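The recovery formulas can be exercised end to end with a deterministic round trip: start from Normal-scale parameters, compute the Lognormal moments $M_i$, $S_i$, $R_{12}$ in closed form, then invert them to recover $\sigma_i$ and $\sigma_{12}$. A minimal sketch (the parameter values are arbitrary illustration choices):

```python
from math import exp, log, sqrt

# Normal-scale parameters (arbitrary illustration values).
mu1, mu2, s1, s2, rho = 1.0, -0.5, 0.4, 0.9, -0.3

# Closed-form Lognormal moments from the derivation above.
M1, M2 = exp(mu1 + s1**2 / 2), exp(mu2 + s2**2 / 2)
S1 = sqrt(exp(2 * mu1 + s1**2) * (exp(s1**2) - 1))
S2 = sqrt(exp(2 * mu2 + s2**2) * (exp(s2**2) - 1))
R12 = (exp(rho * s1 * s2) - 1) / sqrt((exp(s1**2) - 1) * (exp(s2**2) - 1))

# Invert: recover sigma_i and sigma_12 from the Lognormal moments alone.
s1_hat = sqrt(log(1 + S1**2 / M1**2))
s2_hat = sqrt(log(1 + S2**2 / M2**2))
sigma12_hat = log(1 + R12 * S1 * S2 / (M1 * M2))

assert abs(s1_hat - s1) < 1e-9 and abs(s2_hat - s2) < 1e-9
assert abs(sigma12_hat - rho * s1 * s2) < 1e-9
```

In practice $M_i$, $S_i$, and $R_{12}$ would of course be sample estimates, in which case the recovered parameters are method-of-moments estimates with the usual caveats for heavy-tailed lognormal data.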