With an estimate of the log odds ratio $\hat\omega$ and its standard error $\hat\sigma_{\hat\omega}$, you can use the delta method to approximate the standard error of the odds ratio estimate $\newcommand{\e}{\mathrm{e}}\e^{\hat\omega}$:
$$\newcommand{\Var}{\operatorname{Var}}
\newcommand{\dif}{\mathrm{d}}
\begin{align}
\sqrt{\Var \e^{\hat\omega}} & \approx \sqrt{\left(\left.\frac{\dif \e^x}{\dif x}\right|_{\hat\omega}\right)^2 \Var \hat\omega }\\
& = \e^{\hat\omega} \hat\sigma_{\hat\omega}
\end{align}$$
(That's assuming your estimate of the log odds ratio is consistent—i.e. it would tend to the true (population) value as sample size increased.)
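To make the delta-method step concrete, here is a minimal numeric sketch in Python (the values $\hat\omega = 0.5$ and $\hat\sigma_{\hat\omega} = 0.2$ are made up for illustration):

```python
import math

def or_se_from_log_or(omega_hat, se_omega_hat):
    """Delta-method approximation: SE(exp(omega_hat)) ~= exp(omega_hat) * SE(omega_hat)."""
    or_hat = math.exp(omega_hat)
    return or_hat, or_hat * se_omega_hat

# hypothetical estimates: log OR = 0.5 with standard error 0.2
or_hat, or_se = or_se_from_log_or(0.5, 0.2)
```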
Zhang (1998) presented a method for calculating CIs for risk ratios, suggesting you could simply plug the lower and upper bounds of the odds-ratio CI into the conversion formula.
This method does not work: it is biased and generally produces anticonservative (too narrow) 95% CIs for the risk ratio. The reason is the correlation between the intercept and slope terms, as you correctly allude to. If the odds ratio tends toward the lower end of its CI, the intercept increases to account for a higher overall prevalence among those with exposure level 0, and conversely for the upper end; these shifts respectively push the converted bounds lower and higher.
To answer your question outright: you need knowledge of the baseline prevalence of the outcome to obtain correct confidence intervals. For case-control data, that prevalence has to come from other sources.
Alternatively, you can use the delta method if you have the full covariance matrix of the parameter estimates. With a binary exposure as the single predictor, an equivalent parametrization of the OR-to-RR transformation is:
$$RR = \frac{1 + \exp(-\beta_0)}{1+\exp(-\beta_0-\beta_1)}$$
Using the multivariate delta method together with the asymptotic normality of the maximum-likelihood estimates, $\sqrt{n} \left( [\hat{\beta}_0, \hat{\beta}_1] - [\beta_0, \beta_1]\right) \rightarrow_D \mathcal{N} \left(0, \mathcal{I}^{-1}(\beta)\right)$, you can obtain the variance of the approximate normal distribution of the $RR$.
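As a sketch of that computation in Python (the coefficient values and covariance matrix below are hypothetical, standing in for a fitted logistic model's estimates and its estimated vcov):

```python
import numpy as np

def rr_and_se(beta0, beta1, vcov):
    """RR = (1 + exp(-b0)) / (1 + exp(-b0 - b1)) and its delta-method SE,
    given the 2x2 covariance matrix of (b0, b1) from the logistic fit."""
    A = 1.0 + np.exp(-beta0)
    B = 1.0 + np.exp(-beta0 - beta1)
    rr = A / B
    # analytic gradient of RR with respect to (b0, b1)
    d0 = (A * np.exp(-beta0 - beta1) - B * np.exp(-beta0)) / B**2
    d1 = A * np.exp(-beta0 - beta1) / B**2
    g = np.array([d0, d1])
    # delta method: Var(RR) ~= g' Sigma g
    var = g @ np.asarray(vcov) @ g
    return rr, np.sqrt(var)

# hypothetical fitted values b0 = -1.4, b1 = 0.9, with a made-up covariance matrix
rr, se = rr_and_se(-1.4, 0.9, [[0.04, -0.03], [-0.03, 0.06]])
```

Note that the point estimate agrees with the direct ratio of fitted risks, $\frac{\operatorname{expit}(\beta_0+\beta_1)}{\operatorname{expit}(\beta_0)}$, as it must.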
Note that this parametrization only works for a binary exposure in a univariate logistic regression. There are some simple R tricks that make use of the delta method and marginal standardization to handle continuous covariates and other adjustment variables, but for brevity I won't discuss them here.
However, there are several ways to compute the relative risk and its standard error directly from models in R. Two examples:
# simulate a binary exposure and an outcome with baseline risk 0.2
set.seed(1)
x <- sample(0:1, 100, replace = TRUE)
y <- rbinom(100, 1, x * 0.2 + 0.2)
# 1. log-binomial model: coefficients are log relative risks (may need starting values to converge)
glm(y ~ x, family = binomial(link = log))
# 2. Cox model with constant follow-up time: the exposure coefficient estimates the log RR
library(survival)
coxph(Surv(time = rep(1, 100), event = y) ~ x)
Reference: Zhang J, Yu KF (1998), "What's the relative risk? A method of correcting the odds ratio in cohort studies of common outcomes," JAMA 280(19):1690–1691. http://research.labiomed.org/Biostat/Education/Case%20Studies%202005/Session4/ZhangYu.pdf
Best Answer
Work on the log scale for as long as you can and convert to the odds scale only at the last moment: compute the confidence interval on the log scale, then exponentiate the limits and the estimate, as in your step 2. You could compute a standard error for the odds via the delta method, but I believe the way I suggest is simpler.
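For instance, with made-up values (log odds ratio 0.5, standard error 0.2), the exponentiate-the-limits approach looks like this in Python:

```python
import math

def or_ci(omega_hat, se, z=1.959964):
    """95% CI computed on the log-odds scale, then both limits exponentiated."""
    lo, hi = omega_hat - z * se, omega_hat + z * se
    return math.exp(lo), math.exp(hi)

# hypothetical estimates: log OR = 0.5, SE = 0.2
lo, hi = or_ci(0.5, 0.2)
```

The resulting interval is asymmetric about $\e^{\hat\omega}$, which is expected: symmetry on the log scale becomes asymmetry after exponentiation.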