Derive a random variable's mean and standard error from its estimated log-normal distribution

lognormal distribution, standard deviation, standard error

I am trying to do a meta-analysis, but I have come across some papers that estimate the key parameter (i.e., $\alpha$) by assuming its log follows a normal distribution, i.e., $\ln(\alpha) \sim N(\mu, \sigma^2)$.

For example, one paper estimates $\alpha$ for 171 individuals, but only reports the estimated $\mu = -1.327$ and $\sigma^2 = 0.806$.

It also reports, in a footnote, the mean and standard error of $\alpha$, which are 0.367 and 0.020 respectively. In particular, it says:

The standard errors of the estimated parameters (reported in parentheses) are obtained with the Delta method

It is quite clear to me that the 0.367 is derived as $\exp(\mu+\sigma^2/2)$ (see this blog article and Convert from log-normal distribution to normal distribution).

However, I am struggling with getting the standard error of $\alpha$:

  1. According to this blog article, $\mathrm{SD}(\alpha) = \sqrt{\exp(2\mu + 2\sigma^2) - \exp(2\mu + \sigma^2)}$, but then we have $\mathrm{S.E.} = \mathrm{SD}/\sqrt{N} = 0.0268$.
  2. According to Convert from log-normal distribution to normal distribution, we have $\mathrm{SD}(\alpha) = \sqrt{[\exp(\sigma^2)-1]\exp(2\mu+\sigma^2)}$, which gives the corresponding $\mathrm{S.E.} = 0.0236$.

As we can see, neither approach replicates the result in the paper. Could you please share your wisdom on this case? Thank you in advance.


PS:
Since @whuber kindly pointed out that the question was unclear, I have added more details below for your information:

An easy example: for individual choices between a sure amount and a chance of winning more (e.g., $C$ dollars for sure versus a chance $p$ of winning $X$ dollars, $X > C$), we want to know how the subject values money. In economics we call this utility, and the utility function can be a power function, $u(x) = x^\alpha$. Based on this, we can say the subject will choose the risky option if $U(\text{risky}) - U(\text{safe}) > \epsilon$, where $\epsilon \sim N(0, \sigma^2_\epsilon)$, $U(\text{risky}) = p\,X^\alpha$, and $U(\text{safe}) = C^\alpha$.

Thus, we can fit the structural probit model $P(\text{choose the risky option}) = \Phi\big((U(\text{risky}) - U(\text{safe}))/\sigma_\epsilon\big)$ by MLE to estimate $\alpha$.
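
As a minimal sketch (not from the paper; the numbers below are purely illustrative), this choice probability can be computed as:

```
# Minimal sketch of the choice probability in the structural probit model above.
# All inputs (p, X, C, alpha, sigma_eps) are made-up illustrative values.
import numpy as np
from scipy.stats import norm

def p_choose_risky(alpha, p, X, C, sigma_eps):
    """P(choose risky) = Phi((U(risky) - U(safe)) / sigma_eps)."""
    u_risky = p * X**alpha  # expected utility of the risky option
    u_safe = C**alpha       # utility of the sure amount
    return norm.cdf((u_risky - u_safe) / sigma_eps)

# Example: a 50% chance of winning 20 versus a sure 8, with alpha = 0.5
print(p_choose_risky(alpha=0.5, p=0.5, X=20.0, C=8.0, sigma_eps=1.0))
```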

The above is the regular case, but sometimes scholars make assumptions about $\alpha$ to ensure it is positive, for instance $\ln(\alpha) \sim N(\mu_\alpha, \sigma_\alpha^2)$, and this is how the question in this post arises.


As @Matenmakkers points out, the Delta method also involves the other parameters that are estimated jointly. I have added extra info below:

There is another parameter $\gamma$ that is estimated jointly, and the two parameters are assumed to follow a bivariate log-normal distribution:

$$
\left(\begin{array}{l}
\ln \left(\alpha_i\right) \\
\ln \left(\gamma_i\right)
\end{array}\right) \sim N\left[\left(\begin{array}{c}
\mu_\alpha \\
\mu_\gamma
\end{array}\right),\left(\begin{array}{cc}
\sigma_\alpha^2 & \rho \sigma_\alpha \sigma_\gamma \\
\rho \sigma_\alpha \sigma_\gamma & \sigma_\gamma^2
\end{array}\right)\right]
$$

The estimates are $\mu_\gamma = -0.735$, $\sigma_\gamma = 0.896$, and $\rho = 0.306$. Correspondingly, the paper reports that the mean and standard error of $\gamma$ are $0.717$ and $0.027$ respectively.

Best Answer

I had a go at calculating it with the Delta method as they state and also don't get exactly the same result the authors get. For some transformation $g(X)$ of a random variable $X$, the (univariate) Delta method states that

$$\sqrt{n}[g(X) - g(\mu)] \xrightarrow{D} \mathcal{N}(0, \sigma_X^2[g'(\mu)]^2) $$

In other words, the variance of $g(X)$ should be given by $\sigma_X^2[g'(\mu)]^2$. In this case, call $X = \ln(\alpha)$ such that $g(X) = e^X = \alpha$. Then also $g'(X) = e^X$. So

$$\sigma_{g(X)} \approx \sqrt{0.806} \cdot e^{-1.327} \approx 0.2381545$$

Then $\mathrm{S.E.} = \sigma_{g(X)} / \sqrt{171} \approx 0.0182121$, slightly under the quoted 0.020.
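
A quick numerical check of this arithmetic in Python (a sketch, simply plugging in the reported $\mu = -1.327$, $\sigma^2 = 0.806$, and $N = 171$):

```
# Reproduce the univariate Delta-method arithmetic above.
import numpy as np

mu, sigma2, n = -1.327, 0.806, 171

# sigma_{g(X)} ~ sqrt(sigma_X^2) * |g'(mu)| with g(x) = exp(x)
sd_alpha = np.sqrt(sigma2) * np.exp(mu)
se_alpha = sd_alpha / np.sqrt(n)

print(sd_alpha)  # ~0.2382
print(se_alpha)  # ~0.0182
```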

Maybe they used the multivariate Delta method and the discrepancy is due to correlations with other parameters that are estimated jointly, for example?


Edit: with the additional information, the result of the multivariate Delta method should be the same, given that the variables are transformed independently. Let's check. In the multivariate case, the approximate covariance matrix of the parameters after a transformation $g(X, Y)$ is given by

$$\mathbf{J}(\boldsymbol{\mu}) \, \boldsymbol{\Sigma} \, \mathbf{J}(\boldsymbol{\mu})^T$$

With $\mathbf{J}$ the Jacobian of the transformation, this is very similar in form to the univariate case. We call $X = \ln(\alpha)$ and $Y = \ln(\gamma)$ so that $g_1(X, Y) = e^X = \alpha$ and $g_2(X, Y) = e^Y = \gamma$.

Now the Jacobian is

$$ \bf{J} = \begin{pmatrix} \frac{\partial g_1}{\partial X} & \frac{\partial g_1}{\partial Y} \\ \frac{\partial g_2}{\partial X} & \frac{\partial g_2}{\partial Y} \end{pmatrix} = \begin{pmatrix} e^X & 0 \\ 0 & e^Y \end{pmatrix}$$

And the original covariance matrix (with $\sigma_\gamma^2 = 0.896^2 \approx 0.803$) is

$$\Sigma = \begin{pmatrix} 0.806 & 0.306 * \sqrt{0.806*0.803} \\ 0.306 * \sqrt{0.806*0.803} & 0.803 \end{pmatrix} = \begin{pmatrix} 0.806 & 0.246 \\ 0.246 & 0.803 \end{pmatrix}$$

So that the covariance matrix of $\alpha$ and $\gamma$ is approximated by

$$ \Sigma_{\alpha, \gamma} \approx \begin{pmatrix} e^{-1.327} & 0 \\ 0 & e^{-0.735} \end{pmatrix} \begin{pmatrix} 0.806 & 0.246 \\ 0.246 & 0.803 \end{pmatrix} \begin{pmatrix} e^{-1.327} & 0 \\ 0 & e^{-0.735} \end{pmatrix} $$

$$ = \begin{pmatrix} 0.806 (e^{-1.327})^2 & 0.246 e^{-1.327} e^{-0.735} \\ 0.246 e^{-1.327} e^{-0.735} & 0.803 (e^{-0.735})^2 \end{pmatrix} = \begin{pmatrix} 0.0567 & 0.0313 \\ 0.0313 & 0.1846 \end{pmatrix} $$

So the formula for the univariate case reappears on the diagonal. Dividing the square roots of the diagonal entries by $\sqrt{171}$ gives standard errors of $0.0182$ and $0.0329$ for $\alpha$ and $\gamma$ respectively. This is still off from what the authors report; perhaps my calculation is off somewhere, or the software the authors used doesn't calculate it exactly this way (a higher-order Delta method, for example).
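
For reference, here is the same matrix arithmetic as a short Python sketch (it just reproduces the numbers quoted above, using $\sigma_\gamma^2 = 0.896^2 \approx 0.803$):

```
# Reproduce the multivariate Delta-method arithmetic above.
import numpy as np

mu = np.array([-1.327, -0.735])        # mu_alpha, mu_gamma
var = np.array([0.806, 0.803])         # sigma_alpha^2, sigma_gamma^2 (0.896^2)
rho, n = 0.306, 171

cov = rho * np.sqrt(var[0] * var[1])   # off-diagonal of Sigma
Sigma = np.array([[var[0], cov],
                  [cov,    var[1]]])

J = np.diag(np.exp(mu))                # Jacobian of (x, y) -> (e^x, e^y) at mu
Sigma_ag = J @ Sigma @ J.T             # approximate covariance of (alpha, gamma)

print(Sigma_ag)                                 # diagonal ~0.0567 and ~0.1846
print(np.sqrt(np.diag(Sigma_ag)) / np.sqrt(n))  # SEs ~0.0182 and ~0.0329
```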