Following Sutton et al. (2000: 18f), I would suggest converting proportions $p$ to logits:
$\text{logit} = \log(\text{odds}) = \log\left(\frac{p}{1-p}\right).$
Using the number of cases with an event ($N_{\text{event}}$) and without an event ($N_{\neg\text{event}}$), the variance of the logit is given by
$\operatorname{Var}(\text{logit}) = \frac{1}{N_{\text{event}}} + \frac{1}{N_{\neg\text{event}}}.$
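The two formulas above can be sketched in a few lines of Python (the function name is illustrative, not from the source):

```python
import math

def logit_and_variance(n_event, n_no_event):
    """Convert an event/non-event count pair to a logit and its variance.

    Implements the formulas above:
        p = n_event / (n_event + n_no_event)
        logit = log(p / (1 - p))
        Var(logit) = 1/n_event + 1/n_no_event
    """
    p = n_event / (n_event + n_no_event)
    logit = math.log(p / (1 - p))
    variance = 1 / n_event + 1 / n_no_event
    return logit, variance

# Example: 30 events among 100 cases (p = 0.3)
lgt, var = logit_and_variance(30, 70)
print(round(lgt, 4), round(var, 4))  # -0.8473 0.0476
```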
Reference
Sutton, A. J., K. R. Abrams, M. Jonas, T. A. Sheldon and F. Song, 2000: *Methods for Meta-Analysis in Medical Research*. Wiley Series in Probability and Statistics. Chichester; New York: Wiley.
I have consulted several sources, including useful comments from The_old_man and local academics here. There are several answers; the simplest comes first:
Overall, to the best of my knowledge, there is no such thing as a joint measure summarizing Odds Ratios (OR), Incidence Rate Ratios (IRR), Risk Ratios (RR), and Hazard Ratios (HR).
Some conversions are possible: an OR can be converted to an RR, and vice versa, if the prevalence of the outcome in the control group is known. This is detailed in this blog post (thanks for the question, user Amorphia): https://www.r-bloggers.com/2014/01/how-to-convert-odds-ratios-to-relative-risks/
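One common version of this conversion is the Zhang & Yu (1998) formula, $RR = OR / (1 - p_0 + p_0 \cdot OR)$, where $p_0$ is the outcome prevalence in the control group. A minimal sketch (function name is my own, and I assume this is the formula the linked post describes):

```python
def or_to_rr(odds_ratio, p0):
    """Convert an odds ratio to a risk ratio, given the outcome
    prevalence p0 in the control (unexposed) group.

    Zhang & Yu (1998): RR = OR / (1 - p0 + p0 * OR)
    """
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# With a common outcome (p0 = 0.4), the OR overstates the RR considerably:
print(round(or_to_rr(3.0, 0.4), 3))   # 1.667
# With a rare outcome (p0 = 0.01), OR and RR nearly coincide:
print(round(or_to_rr(3.0, 0.01), 3))  # 2.941
```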
However, be aware that relative risk only makes sense in a prospective study: it would be numerically possible, but meaningless, to calculate a relative risk in a cross-sectional study, because risk accumulates over a time span. The odds ratio, by contrast, can be used in a variety of study types.
Also, HRs can be approximated using various techniques; Cochrane currently recommends the HR as the best measure for time-to-event data. The estimation techniques are detailed in section 6.8 of the current version of the handbook: training.cochrane.org/handbook/current/. However, be aware that this makes sense for randomized studies, which are already "controlled" for covariates, but much less so for observational studies, as these techniques do not take confounding into consideration. Also, for causal purposes, be aware of the shortcomings of the HR in comparison to the RR; see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653612/
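One of the approximation techniques covered there is the log-rank "observed minus expected" approach, where $\log(HR) \approx (O - E)/V$ with $SE(\log HR) = 1/\sqrt{V}$. A hedged sketch (my own function name; I assume the reported $O - E$ and $V$ come from a published log-rank analysis):

```python
import math

def hr_from_o_e(observed, expected, variance):
    """Approximate a hazard ratio from log-rank statistics:
        log(HR) = (O - E) / V
        SE(log HR) = 1 / sqrt(V)
    """
    log_hr = (observed - expected) / variance
    se = 1 / math.sqrt(variance)
    return math.exp(log_hr), se

# Example: O = 30 events observed vs E = 40 expected, V = 25
hr, se = hr_from_o_e(30, 40, 25)
print(round(hr, 3), round(se, 3))  # 0.67 0.2
```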
For comparison purposes, HRs and IRRs can more or less be thought of as the same thing - this is backed up by this article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653612/
If the event in question is sufficiently rare, OR, RR, IRR, and HR all approximate each other.
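The rare-event approximation is easy to see numerically from a 2x2 table (illustrative function name and made-up counts):

```python
def rr_and_or(a, b, c, d):
    """Risk ratio and odds ratio from a 2x2 table:
    a = exposed events, b = exposed non-events,
    c = unexposed events, d = unexposed non-events.
    """
    rr = (a / (a + b)) / (c / (c + d))
    odds_ratio = (a * d) / (b * c)
    return rr, odds_ratio

# Common outcome: RR and OR diverge
print(tuple(round(x, 3) for x in rr_and_or(40, 60, 20, 80)))  # (2.0, 2.667)
# Rare outcome (same RR of 2): OR closely approximates RR
print(tuple(round(x, 3) for x in rr_and_or(4, 996, 2, 998)))  # (2.0, 2.004)
```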
However, none of the conversions or comparisons can be done without either further information or assumptions that may or may not hold. If the measure of interest is risk, the following relationship holds for a harmful exposure, i.e. when all the ratios are above 1; it reverses for a protective exposure (Modern Epidemiology, Third Edition):
$RR < (IRR \approx HR) < OR$
And this can be used to compare the measures as they are (though of course it will not allow a meta-analysis).
A final option is simply to state the direction of risk (for example, +: risk goes up; -: risk goes down; 0: no association) and abandon the magnitude altogether - this works for all the measures mentioned.
Best Answer
There are different things people call effect sizes, and what they understand by the term may depend on the scientific discipline or their background. I first encountered the term in psychology, where the most common understanding is Cohen's $d$ or $\eta^2$.
Wikipedia has a decent overview, and we also have an effect-size tag. Wikipedia specifically mentions ORs as an example of an effect size.
I would recommend that you read through the Wikipedia page and add to your paper any other measures of effect size that make sense to you and for your analysis. In the cover letter to your resubmission, explain what you did and request that the reviewer note explicitly what kind of effect size they would like to see, in case what you did is not enough for them. You could hint that you already had ORs, perhaps like this: