I am trying to double-check whether betas calculated from odds ratios and betas calculated from the reciprocals of the same odds ratios have the same p-values and standard errors (I am calculating them from the same population using logistic regression). Regarding standard errors, I have already found a question on here (What is the standard error of the inverse of a known odds ratio?), but I am not sure of the reliability of the answer. I have tried to verify this with the statistical software PLINK, and it looks as though the p-values and SEs stay the same (or very nearly the same, differing only after a few decimal places) for both the original betas and the betas calculated from the reciprocals of the odds ratios. If anyone has a mathematical/statistical explanation, that would be great.
Solved – p-value and standard error of logistic regression betas calculated from reciprocal of odds ratio
logistic, odds-ratio, p-value, standard-error
Related Solutions
In most meta-analyses of odds ratios, the standard errors $se_i$ are based on the log odds ratios $\log(OR_i)$. So, do you happen to know how your $se_i$ were estimated (and which metric they reflect: $OR$ or $\log(OR)$)? Assuming the $se_i$ are based on $\log(OR_i)$, the pooled standard error (under a fixed effect model) is easy to compute. First, compute the weight for each effect size: $w_i = \frac{1}{se_i^2}$. Second, the pooled standard error is $se_{FEM} = \sqrt{\frac{1}{\sum w_i}}$. Furthermore, let $\log(OR_{FEM})$ be the common effect (fixed effect model). Then the ("pooled") 95% confidence interval is $\log(OR_{FEM}) \pm 1.96 \cdot se_{FEM}$.
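A minimal sketch of these pooling formulas in Python, using the same two studies as the R example below (ORs 0.75 and 0.85 with log-scale standard errors 0.0937 and 0.1029):

```python
import math

# Two studies: odds ratios with standard errors on the log(OR) scale.
or_i = [0.75, 0.85]
se_i = [0.0937, 0.1029]

log_or = [math.log(o) for o in or_i]
w = [1 / s**2 for s in se_i]                  # inverse-variance weights w_i = 1/se_i^2
se_fem = math.sqrt(1 / sum(w))                # pooled standard error
log_or_fem = sum(wi * b for wi, b in zip(w, log_or)) / sum(w)  # common effect

lo = log_or_fem - 1.96 * se_fem
hi = log_or_fem + 1.96 * se_fem
print(math.exp(log_or_fem))                   # pooled OR, about 0.7938
print(math.exp(lo), math.exp(hi))             # 95% CI, about [0.693, 0.9092]
```

This matches the metagen fixed-effect output shown below (OR 0.7938, CI [0.693; 0.9092]).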
Update
Since BIBB kindly provided the data, I am able to run the 'full' meta-analysis in R.
library(meta)
or <- c(0.75, 0.85)
se <- c(0.0937, 0.1029)
logor <- log(or)
(or.fem <- metagen(logor, se, sm = "OR"))
OR 95%-CI %W(fixed) %W(random)
1 0.75 [0.6242; 0.9012] 54.67 54.67
2 0.85 [0.6948; 1.0399] 45.33 45.33
Number of trials combined: 2
OR 95%-CI z p.value
Fixed effect model 0.7938 [0.693; 0.9092] -3.3335 0.0009
Random effects model 0.7938 [0.693; 0.9092] -3.3335 0.0009
Quantifying heterogeneity:
tau^2 < 0.0001; H = 1; I^2 = 0%
Test of heterogeneity:
Q d.f. p.value
0.81 1 0.3685
Method: Inverse variance method
References
See, e.g., Lipsey/Wilson (2001: 114)
I did the following in Stata; the first run is a fixed-effects analysis and the second a random-effects analysis. I got different answers than you did.
Study | ES [95% Conf. Interval] % Weight
---------------------+---------------------------------------------------
1 | 2.700 1.800 4.000 63.47
2 | 1.300 0.500 3.400 36.53
---------------------+---------------------------------------------------
I-V pooled ES | 2.189 1.312 3.065 100.00
---------------------+---------------------------------------------------
Heterogeneity calculated by formula
Q = SIGMA_i{ (1/variance_i)*(effect_i - effect_pooled)^2 }
where variance_i = ((upper limit - lower limit)/(2*z))^2
Heterogeneity chi-squared = 2.27 (d.f. = 1) p = 0.132
I-squared (variation in ES attributable to heterogeneity) = 56.0%
Test of ES=0 : z= 4.89 p = 0.000
. metan or ll ul, effect(Odds Ratio) null(1) lcols(trialname) texts(200) random
Study | ES [95% Conf. Interval] % Weight
---------------------+---------------------------------------------------
1 | 2.700 1.800 4.000 55.93
2 | 1.300 0.500 3.400 44.07
---------------------+---------------------------------------------------
D+L pooled ES | 2.083 0.721 3.445 100.00
---------------------+---------------------------------------------------
Heterogeneity calculated by formula
Q = SIGMA_i{ (1/variance_i)*(effect_i - effect_pooled)^2 }
where variance_i = ((upper limit - lower limit)/(2*z))^2
Heterogeneity chi-squared = 2.27 (d.f. = 1) p = 0.132
I-squared (variation in ES attributable to heterogeneity) = 56.0%
Estimate of between-study variance Tau-squared = 0.5488
Test of ES=0 : z= 3.00 p = 0.003
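The heterogeneity statistics in the Stata output above can be reproduced directly from the reported effect sizes and confidence limits, using the quoted formula for variance_i; a Python sketch:

```python
import math

# Effect sizes and 95% CIs as reported by Stata's metan.
es = [2.7, 1.3]
ci = [(1.8, 4.0), (0.5, 3.4)]

z = 1.96  # normal critical value used to back out the variance from the CI
var = [((hi - lo) / (2 * z))**2 for lo, hi in ci]  # variance_i = ((ul - ll)/(2z))^2
w = [1 / v for v in var]

pooled = sum(wi * e for wi, e in zip(w, es)) / sum(w)
q = sum(wi * (e - pooled)**2 for wi, e in zip(w, es))  # Cochran's Q
i2 = max(0.0, (q - (len(es) - 1)) / q)                 # I^2 = (Q - df)/Q

print(round(pooled, 3))    # 2.189 (I-V pooled ES)
print(round(q, 2))         # 2.27  (heterogeneity chi-squared)
print(round(i2 * 100, 1))  # 56.0  (I-squared, %)
```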
Best Answer
I trust we agree that beta = ln(OR). If not, then I have misunderstood the question.
Then -beta = -ln(OR) = ln(1/OR). But Var(-beta) = Var(beta), so Var(ln(1/OR)) = Var(ln(OR)): the variances of the two betas are the same, and hence so are their standard errors.
The test statistics differ only in sign: z1 = beta/se(beta) and z2 = -beta/se(-beta) = -beta/se(beta) = -z1. For a two-sided test, their p-values are therefore identical.
No computation needed, just some algebra.
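To make the algebra concrete, here is a small numerical check in Python. For a single binary predictor, the logistic-regression slope has the closed form ln(ad/bc) with the Woolf standard error sqrt(1/a + 1/b + 1/c + 1/d); the 2x2 counts below are made up for illustration:

```python
import math

# Hypothetical 2x2 table (counts are made up):
# a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls
a, b, c, d = 30, 70, 15, 85

beta = math.log((a * d) / (b * c))            # ln(OR)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # Woolf SE of ln(OR)

# Reverse the exposure coding: the odds ratio becomes its reciprocal.
beta_recip = math.log((b * c) / (a * d))      # ln(1/OR) = -ln(OR)
se_recip = math.sqrt(1/c + 1/d + 1/a + 1/b)   # same sum of reciprocal counts

def two_sided_p(z):
    # Two-sided normal p-value via the complementary error function.
    return math.erfc(abs(z) / math.sqrt(2))

print(beta, beta_recip)                                            # opposite signs
print(se, se_recip)                                                # identical
print(two_sided_p(beta / se), two_sided_p(beta_recip / se_recip))  # identical
```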