Meta-Analysis – How to Conduct Meta-Analysis on Studies Reporting Various Ratios Like Odds, Hazards, and Rate Ratios

meta-analysis odds-ratio

I'm doing a meta-analysis of some studies that report results variously as odds ratios, hazard ratios or rate ratios (all with confidence intervals). Is there any way to combine these together/convert between them so that I can do a meta-analysis of all the studies?

Best Answer

It depends on why the studies reported different measures, and what additional information you have to work with - for example, whether you have the cell counts needed to calculate your own effect measures. However, some initial thoughts:

  1. There are two logical groups here. Odds ratios and relative risks, if any of your studies report them, belong together (in that the odds ratio is typically trying to estimate a relative risk when the actual relative risk cannot be calculated). Similarly, rate ratios (which I'm going to assume are incidence density ratios from a Poisson regression) and hazard ratios both deal with time-to-event data. I really wouldn't cross-compare ORs and HRs, for example.
  2. Relative risks and ORs: Honestly, I'd probably run parallel analyses for these two measures. If a study is a cohort or cross-sectional population design and reports its numbers, you can calculate your own ORs from those. If you choose to do that, however, I'd definitely look at heterogeneity based on study design.
  3. Rate ratios and HRs: These you might be more capable of getting away with. If the rate ratio is the ratio of two rates calculated as cases/person-time, then it is actually a hazard ratio estimate as well - just a hazard ratio made under the assumption of both proportional and constant hazards. You can treat those directly as HRs, but again, I'd look at study heterogeneity by which measure was reported, as rate ratios and an HR that came out of something like a Cox model are estimated under different assumptions.
  4. When in doubt, it's probably best to split up the studies and look at each sub-group. Don't necessarily look at this as a failure. If different effect measures and study designs are reporting different results, that is, all by itself, a finding. To paraphrase a professor of mine, heterogeneity that prevents the estimation of a single pooled estimate is a result worth reporting.
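Whichever grouping you settle on, the mechanics are the same within a sub-group: all of these ratio measures are analysed on the log scale, so you can back out a log effect and its standard error from each reported estimate and 95% CI, then pool with inverse-variance weights. Here's a minimal sketch; the example numbers are hypothetical, it assumes symmetric CIs on the log scale (the usual Wald-type interval), and a real analysis would use a dedicated package and likely a random-effects model:

```python
import math

def log_effect_from_ci(estimate, lo, hi, z=1.96):
    """Recover the log-scale effect and SE from a ratio and its 95% CI.

    Works the same way for ORs, HRs, and rate ratios, since all are
    modelled on the log scale. Assumes the CI is symmetric on that scale.
    """
    log_est = math.log(estimate)
    se = (math.log(hi) - math.log(lo)) / (2 * z)
    return log_est, se

def pool_fixed(effects):
    """Inverse-variance (fixed-effect) pooling of (log_effect, se) pairs."""
    weights = [1 / se ** 2 for _, se in effects]
    pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical rate-ratio/HR sub-group: (estimate, 95% CI low, 95% CI high)
studies = [(1.4, 1.1, 1.8), (1.2, 0.9, 1.6), (1.6, 1.2, 2.1)]
pairs = [log_effect_from_ci(*s) for s in studies]
pooled, se = pool_fixed(pairs)
print(f"pooled ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f}"
      f"-{math.exp(pooled + 1.96 * se):.2f})")
```

Running this separately for the OR/RR sub-group and the rate-ratio/HR sub-group keeps the two logical families apart, as in point 1, while still letting you report a pooled estimate for each.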