First, it's probably best to refrain from using the term "standard deviation" in the context of regression coefficients, because it is ambiguous: it could mean either the standard deviation of the sampling distribution of a statistic or the standard deviation of some quantity among members of the underlying population. The former depends on sample size; the latter doesn't (although estimates of it do).
The term "standard error" is better here: it specifically has the former meaning, as both @mdewey and Wikipedia note. At least in R, the terminology "standard error" is used for reports of error estimates in regression coefficients.
Second, if you are evaluating hazard ratios from survival models, those are exponentiated coefficients determined by maximum (partial) likelihood methods, with asymptotic normality assumed for the coefficient estimates on the original scale. t-statistics aren't involved in setting their confidence intervals (CI). The same is true of most risk ratios, rate ratios, and response ratios reported from logistic or Poisson regression models. It's wise to check the reported methods for statistical details: if the "significance" is based on a z-test or a Wald test, then normality was assumed.
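As a sketch of that relationship (with made-up values for the coefficient estimate and its standard error), a Wald 95% CI on the hazard-ratio scale comes from exponentiating the normal-theory CI of the coefficient; note that only the normal quantile 1.96 appears, not a t quantile:

```python
import math

# Hypothetical Cox-model coefficient estimate and its standard error
beta_hat = 0.35
se = 0.12

z = 1.96  # standard-normal quantile for a 95% CI; no t-distribution involved

hr = math.exp(beta_hat)            # hazard ratio
lci = math.exp(beta_hat - z * se)  # lower 95% confidence limit
uci = math.exp(beta_hat + z * se)  # upper 95% confidence limit

print(f"HR = {hr:.3f}, 95% CI ({lci:.3f}, {uci:.3f})")
```

The interval is symmetric around `beta_hat` on the coefficient scale but asymmetric around `hr` after exponentiation, which is why reported hazard-ratio CIs look lopsided.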
Third, in terms of your sensitivity analysis, it probably will be easiest and safest to sample from the assumed normal distributions of the regression coefficients and only move to the hazard-ratio scale at the last stage. If you are going to be doing simulations as part of your sensitivity analysis, the software will probably assume you are providing the regression coefficients that go into the linear-predictor values, not the hazard ratios.
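A minimal sketch of that workflow, again with hypothetical values for the coefficient estimate and its standard error: sample on the coefficient scale and exponentiate only at the end.

```python
import math
import random

random.seed(1)

# Hypothetical coefficient estimate and standard error on the original scale
beta_hat, se = 0.35, 0.12

# Draw from the assumed normal sampling distribution of the coefficient...
draws = [random.gauss(beta_hat, se) for _ in range(100_000)]

# ...and move to the hazard-ratio scale only at the last stage
hr_draws = [math.exp(b) for b in draws]

mean_beta = sum(draws) / len(draws)
print(f"mean of coefficient draws: {mean_beta:.3f}")  # close to beta_hat
```

Doing it in this order keeps the simulation consistent with software that expects coefficients for the linear predictor rather than hazard ratios.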
Fourth, your desired formula for estimating the standard error of the coefficient estimate (and thus the standard deviation of the corresponding normal distribution you might use for sensitivity analysis) is essentially what you've written, except that your text in Step 2 doesn't agree with what you then do. For a normally distributed statistic, a symmetric 95% CI (as usually assumed on the regression-coefficient scale) has limits at the 2.5th and 97.5th percentiles of the estimated distribution. Back-calculating from the 95% upper and lower confidence limits (UCI, LCI) of a hazard ratio thus provides a standard-error estimate on the regression-coefficient scale: $$\text{SE}=\frac{\ln \text{UCI} - \ln \text{LCI}}{2 \times 1.96}$$
with the $1.96$ value in the denominator changed, as you note, if the original CI were instead a 90% or 99% CI.
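That back-calculation is a one-liner; here it is as a small helper, applied to a hypothetical reported hazard-ratio CI:

```python
import math

def se_from_hr_ci(lci, uci, level_z=1.96):
    """Back-calculate the coefficient-scale SE from a hazard ratio's CI.

    lci, uci: lower and upper confidence limits of the hazard ratio.
    level_z: normal quantile matching the CI level
             (1.645 for 90%, 1.96 for 95%, 2.576 for 99%).
    """
    return (math.log(uci) - math.log(lci)) / (2 * level_z)

# Hypothetical reported 95% CI for a hazard ratio
se = se_from_hr_ci(1.12, 1.80)
print(f"SE on the coefficient scale: {se:.4f}")
```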
Your colleague is correct.
In the US, 1.6 to 1.7 m is near the middle of the range of adult female heights. According to Wolfram Alpha, which summarizes NHANES 2006 data, the height distribution in this range should look close to this:
This is extremely close to uniform: its mean is 1.649 m and its standard deviation is 0.0287 m (whereas a uniform distribution in this range would have a mean of 1.650 m and SD of 0.0289 m). Its skewness coefficient is only 0.054.
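The quoted uniform benchmarks follow from the standard formulas for a uniform distribution on $[a, b]$: mean $(a+b)/2$ and standard deviation $(b-a)/\sqrt{12}$.

```python
import math

a, b = 1.6, 1.7  # height range in metres

uniform_mean = (a + b) / 2            # 1.650 m
uniform_sd = (b - a) / math.sqrt(12)  # about 0.0289 m

print(uniform_mean, round(uniform_sd, 4))
```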
Accordingly, independent samples drawn from this distribution will have means that are close to normally distributed. Here, for instance, is a histogram of means of 10,000 samples of just four heights drawn (independently) from this distribution:
It is only very, very slightly non-normal: a Kolmogorov-Smirnov test rejects normality at p = 0.94%, which is remarkably large given that there are 10,000 data points (with that many points, even minute departures from normality tend to produce tiny p-values). For the purpose of planning comparisons of mean heights among random groups of women, the normal approximation will work well, and standard power calculations ought to give good guidance.
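The simulation is easy to reproduce in outline. Since the actual distribution is so close to uniform, a uniform(1.6, 1.7) stand-in (an assumption, not the NHANES data itself) gives essentially the same picture: means of four draws cluster around 1.650 m with standard deviation near $0.0289/\sqrt{4} \approx 0.0144$ m.

```python
import random
import statistics

random.seed(0)

# Stand-in for the near-uniform height distribution between 1.6 and 1.7 m
def sample_height():
    return random.uniform(1.6, 1.7)

# Means of 10,000 samples of four heights each
means = [statistics.mean(sample_height() for _ in range(4))
         for _ in range(10_000)]

print(round(statistics.mean(means), 3))   # close to 1.650
print(round(statistics.stdev(means), 4))  # close to 0.0289 / 2 = 0.0144
```

By the central limit theorem, these sample means are already nearly normal even at n = 4, which is what makes the power-calculation shortcut safe here.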