Solved – Calculating standard deviation from log-normal distribution confidence intervals

confidence interval, lognormal distribution, standard deviation

I have the results of a meta-analysis of 10 studies that reports a combined random-effects odds ratio (computed using Woolf's method) and 95% confidence interval for an event happening in one group relative to the other:

$OR = 7.1\ (95\%\ CI\ 4.4-11.7)$

I'm now building a model that needs to sample around this odds ratio (for the purposes of a probabilistic sensitivity analysis). Given that it's an odds ratio, I'm assuming that it's log-normally distributed and that 7.1 is the mean, but what's the best way to convert the confidence interval to a standard deviation so I can sample the distribution using Excel's LOGNORMDIST function?
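My understanding of the log-normal assumption is that $\ln(\text{OR})$ is normally distributed, and that the mean and standard deviation LOGNORMDIST expects are those of $\ln(\text{OR})$ (writing them as $\mu$ and $\sigma$ here):

$$\ln(\text{OR}) \sim N(\mu, \sigma^2), \qquad \mu = \ln(7.1) \approx 1.96$$

so the missing piece is $\sigma$.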

(I've found similar questions for the normal and gamma distributions ("From confidence interval to standard deviation – what am I missing?" and "How to calculate mean and standard deviation in R given confidence interval and a normal or gamma distribution?"), and a question about calculating the confidence interval for a log-normal distribution ("How do I calculate a confidence interval for the mean of a log-normal data set?"), but I can't seem to find how to go the other way round.)

Best Answer

I've solved this as follows:

$$SD = \frac{\left(\frac{\ln(\text{OR})-\ln(\text{Lower CI bound})}{1.96}\right)}{\sqrt n}$$

This is the difference between the $\ln$ of the odds ratio and the $\ln$ of the lower confidence interval bound (the error on the log scale), divided by 1.96 (which gives the standard error), and then divided by $\sqrt{n}$ (which gives the standard deviation).
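Plugging in the figures from the question (OR $= 7.1$, lower bound $4.4$, and $n = 10$ studies, as explained below) gives:

$$SE = \frac{\ln(7.1) - \ln(4.4)}{1.96} \approx \frac{1.960 - 1.482}{1.96} \approx 0.244, \qquad SD = \frac{0.244}{\sqrt{10}} \approx 0.077$$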

Since the meta-analysis didn't make use of patient-level data and just combined study-level results under random-effects assumptions, $n$ was simply the number of studies (10 in this case).
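For anyone who wants to check the arithmetic outside of Excel, here is a minimal Python sketch of the same calculation (Python standing in for the spreadsheet; numpy's log-normal generator, like LOGNORMDIST, is parameterised by the mean and standard deviation of the underlying normal):

```python
import numpy as np

# Figures reported in the question
odds_ratio = 7.1   # combined random-effects odds ratio
lower_ci = 4.4     # lower bound of the 95% confidence interval
n_studies = 10     # number of studies in the meta-analysis

# Difference between the OR and the lower CI bound on the log scale
log_error = np.log(odds_ratio) - np.log(lower_ci)

# Divide by 1.96 for the standard error, then by sqrt(n) for the
# standard deviation, following the formula in the answer above
standard_error = log_error / 1.96
standard_dev = standard_error / np.sqrt(n_studies)
print(f"SE = {standard_error:.3f}, SD = {standard_dev:.3f}")  # SE = 0.244, SD = 0.077

# Sample the odds ratio for the probabilistic sensitivity analysis;
# mean and sigma are on the log scale, as with LOGNORMDIST's parameters
rng = np.random.default_rng(42)
samples = rng.lognormal(mean=np.log(odds_ratio), sigma=standard_dev, size=10_000)
print(f"Sampled OR: median = {np.median(samples):.2f}")
```

The last two lines draw directly from the fitted log-normal; the Excel equivalent would be an inverse log-normal function driven by RAND(), something like LOGINV(RAND(), LN(7.1), 0.077).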