You can calculate (or at least approximate) the standard errors from the p-values. First, convert the two-sided p-values into one-sided p-values by dividing them by 2, which gives $p = .0115$ and $p = .007$. Then convert these p-values to the corresponding z-values: for $p = .0115$, this is $z = -2.273$, and for $p = .007$, this is $z = -2.457$ (they are negative, since the odds ratios are below 1). These z-values are in fact the test statistics, obtained by dividing the log odds ratios by the corresponding standard errors (i.e., $z = log(OR) / SE$). It follows that $SE = log(OR) / z$, which yields $SE = 0.071$ for the first study and $SE = 0.038$ for the second.
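These steps can be reproduced in R, starting from the reported odds ratios and their two-sided p-values:

```r
or <- c(.85, .91)      ### the reported odds ratios
p  <- c(.023, .014)    ### the reported two-sided p-values
z  <- qnorm(p / 2)     ### one-sided p-values -> z-values (negative, as OR < 1)
se <- log(or) / z      ### standard errors of the log odds ratios
round(se, 3)           ### 0.071 and 0.038
```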
Now you have everything to do a meta-analysis. I'll illustrate how you can do the computations with R, using the metafor package:
library(metafor)
yi <- log(c(.85, .91)) ### the log odds ratios
sei <- c(0.071, .038) ### the corresponding standard errors
res <- rma(yi=yi, sei=sei) ### fit a random-effects model to these data
res
Random-Effects Model (k = 2; tau^2 estimator: REML)
tau^2 (estimate of total amount of heterogeneity): 0 (SE = 0.0046)
tau (sqrt of the estimate of total heterogeneity): 0
I^2 (% of total variability due to heterogeneity): 0.00%
H^2 (total variability / within-study variance): 1.00
Test for Heterogeneity:
Q(df = 1) = 0.7174, p-val = 0.3970
Model Results:
estimate       se     zval    pval    ci.lb    ci.ub
 -0.1095   0.0335  -3.2683  0.0011  -0.1752  -0.0438  **
Note that the meta-analysis is done using the log odds ratios. So, $-0.1095$ is the estimated pooled log odds ratio based on these two studies. Let's convert this back to an odds ratio:
predict(res, transf=exp, digits=2)
pred se ci.lb ci.ub cr.lb cr.ub
0.90 NA 0.84 0.96 0.84 0.96
So, the pooled odds ratio is .90 with 95% CI: .84 to .96.
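Incidentally, since the heterogeneity estimate is $tau^2 = 0$ here, a fixed-effect model gives exactly the same pooled estimate; you can verify this with:

```r
library(metafor)
yi  <- log(c(.85, .91))   ### the log odds ratios
sei <- c(0.071, .038)     ### the corresponding standard errors
res.fe <- rma(yi=yi, sei=sei, method="FE")   ### fixed-effect model
predict(res.fe, transf=exp, digits=2)        ### same pooled OR of 0.90
```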
It depends on why they reported those measures, and on what additional information you have to work with (for example, whether you have the cell counts needed to calculate your own effect measures). However, some initial thoughts:
- There are two logical groups here. Odds ratios and relative risks, if they're reported in any of your studies, belong together (the odds ratio is typically trying to estimate a relative risk when the actual relative risk cannot be calculated). Similarly, rate ratios (which I'm going to assume are incidence density ratios from a Poisson regression) and hazard ratios both deal with time-to-event data. I really wouldn't cross-compare ORs and HRs, for example.
- Relative risks and ORs: Honestly, I'd probably run parallel analyses for these two measures. If a study is a cohort or cross-sectional population design and reports its numbers, you can calculate your own ORs from those. If you choose to do that, however, I'd definitely look at heterogeneity based on study design.
- Rate ratios and HRs: These you might be better able to combine. If the rate ratio is the ratio of two rates calculated as cases/person-time, then it is actually a hazard ratio estimate as well; it's just a hazard ratio obtained under the assumption of both proportional and constant hazards. You can convert those directly to an HR, but again, I'd look at study heterogeneity by which measure was reported, since rate ratios and an HR that came out of something like a Cox model rest on different assumptions.
- When in doubt, it's probably best to split up the studies and look at each subgroup. Don't necessarily view this as a failure: if different effect measures and study designs yield different results, that is, all by itself, a finding. To paraphrase a professor of mine, heterogeneity that prevents the estimation of a single pooled estimate is a result worth reporting.
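As a sketch of the "calculate your own effect measures" idea above, metafor's escalc() can turn raw counts into log odds ratios or log rate ratios (the counts below are made up purely for illustration):

```r
library(metafor)

### hypothetical 2x2 cell counts: events / non-events in two groups
escalc(measure="OR", ai=20, bi=80, ci=30, di=70)
### yi = log odds ratio, vi = its sampling variance

### hypothetical event counts and person-time in two groups
escalc(measure="IRR", x1i=30, t1i=1000, x2i=50, t2i=1200)
### yi = log incidence rate ratio, vi = 1/x1i + 1/x2i
```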
Best Answer
In most meta-analyses of odds ratios, the standard errors $se_i$ are based on the log odds ratios $log(OR_i)$. So, do you happen to know how your $se_i$ were estimated (and which metric they reflect: $OR$ or $log(OR)$)? Assuming the $se_i$ are based on $log(OR_i)$, the pooled standard error (under a fixed effect model) can be computed easily. First, compute the weight for each effect size: $w_i = \frac{1}{se_i^2}$. Second, the pooled standard error is $se_{FEM} = \sqrt{\frac{1}{\sum w_i}}$. Furthermore, let $log(OR_{FEM}) = \frac{\sum w_i \cdot log(OR_i)}{\sum w_i}$ be the common effect (fixed effect model). Then, the ("pooled") 95% confidence interval is $log(OR_{FEM}) \pm 1.96 \cdot se_{FEM}$.
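These formulas are easy to carry out by hand in R; here using the log odds ratios and standard errors derived earlier in this thread as example inputs:

```r
yi  <- log(c(.85, .91))          ### log odds ratios
sei <- c(0.071, .038)            ### their standard errors
wi  <- 1 / sei^2                 ### inverse-variance weights
b   <- sum(wi * yi) / sum(wi)    ### pooled log odds ratio (fixed effect)
se  <- sqrt(1 / sum(wi))         ### pooled standard error
ci  <- b + c(-1.96, 1.96) * se   ### 95% CI on the log scale
round(exp(c(b, ci)), 2)          ### back-transformed: 0.90 (0.84, 0.96)
```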
Update
Since BIBB kindly provided the data, I am able to run the 'full' meta-analysis in R.
References
See, e.g., Lipsey, M. W., & Wilson, D. B. (2001). Practical Meta-Analysis. Thousand Oaks, CA: Sage, p. 114.