If you are willing to assume that $Y$ has a symmetric distribution within the two groups, then the medians of the two groups (i.e., $q_{21}$ and $q_{22}$) could be used in place of the means. Furthermore, if you are willing to assume that $Y$ is normally distributed within the two groups, then you could make use of the relationship between the IQR and the SD for the normal distribution, namely, $SD \approx IQR / 1.35$. So, you can compute the two IQRs with $IQR_1 = q_{31} - q_{11}$ and $IQR_2 = q_{32} - q_{12}$, transform them to SDs, pool those two SDs in the usual manner, and then you have all of the pieces to compute the standardized mean difference.

**Example:** For your example data, this would be $$IQR_1 = 174 - 58 = 116$$ $$IQR_2 = 158 - 31 = 127,$$ so $$SD_1 = 116 / 1.35 = 85.93$$ $$SD_2 = 127 / 1.35 = 94.07.$$ Therefore, $$SD_p = \sqrt{\frac{(80-1)85.93^2 + (46-1)94.07^2}{80+46-2}} = 88.97.$$ And finally: $$d = \frac{85-79}{88.97} = 0.07$$ Now you could use the usual equation to estimate the sampling variance of $d$ (Hedges & Olkin, 1985): $$v = \frac{1}{80} + \frac{1}{46} + \frac{0.07^2}{2(80+46)} = 0.034.$$
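The arithmetic above can be verified programmatically; here is a minimal Python sketch (not part of the original answer) that reproduces the numbers:

```python
import math

# Quartiles reported for the two groups (example data from the question)
q1_1, q3_1 = 58, 174   # group 1: first and third quartile
q1_2, q3_2 = 31, 158   # group 2
n1, n2 = 80, 46        # group sizes
m1, m2 = 85, 79        # reported medians, used in place of means

# IQRs and the normal-theory approximation SD ~ IQR / 1.35
iqr1 = q3_1 - q1_1     # 116
iqr2 = q3_2 - q1_2     # 127
sd1 = iqr1 / 1.35
sd2 = iqr2 / 1.35

# Pooled SD, standardized mean difference, and its sampling variance
sd_p = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (m1 - m2) / sd_p
v = 1/n1 + 1/n2 + d**2 / (2 * (n1 + n2))

print(round(sd_p, 2), round(d, 2), round(v, 3))  # 88.97 0.07 0.034
```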

**Remarks:** Under normality, $d$ should be an okay estimator of the true SMD. However, the use of medians in place of means and the estimation of the SDs via the IQRs involve a loss of precision. The usual equation for the sampling variance of $d$ does not reflect that, so it yields values that are probably too small (on average).

Also, the appropriateness of this method hinges on the symmetry/normality assumption. Unfortunately, authors typically choose to report medians and IQRs precisely when they suspect that $Y$ has a non-normal or asymmetric distribution. So, I would regard this method only as a rough approximation.

References:

Hedges, L. V., & Olkin, I. (1985). *Statistical methods for meta-analysis.* Orlando: Academic Press.

It is unbiased; let's see. Let the linear model be $Y = X\beta + e$, in matrix form, with $E e = 0$ and with $\Omega$ as the variance-covariance matrix of the errors $e$. We use the matrix $W$ for the weights. Then the weighted linear least squares estimator is
$$
\hat{\beta} = (X'WX)^{-1} X'W Y
$$
and we can calculate its expectation as
$$
E \hat{\beta} = (X'WX)^{-1} X'W \, E Y = (X'WX)^{-1} X'W X\beta = \beta
$$
and you can observe that the variance-covariance matrix $\Omega$ does not play any role in this computation!
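A small Monte Carlo check of this fact (the design matrix, error variances, and weights below are arbitrary choices for illustration, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 40, 20000
X = np.column_stack([np.ones(n), np.linspace(0, 10, n)])  # intercept + slope
beta = np.array([2.0, 3.0])

# Heteroskedastic errors: Var(e_i) varies across observations (diagonal Omega)
sigma = np.linspace(0.5, 2.0, n)

# Deliberately "wrong" fixed weights, unrelated to the true variances
W = np.diag(1.0 / (1.0 + np.arange(n)))

# Precompute the estimator matrix (X'WX)^{-1} X'W
A = np.linalg.solve(X.T @ W @ X, X.T @ W)

est = np.zeros((reps, 2))
for r in range(reps):
    y = X @ beta + rng.normal(0, sigma)
    est[r] = A @ y

print(est.mean(axis=0))  # close to [2, 3] despite the arbitrary weights
```

The average of the estimates lands near the true $\beta$ even though $W$ bears no relation to $\Omega$, which is exactly what the derivation says.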

Of course, if the weight matrix $W$ is estimated from the data in some way, then the above analysis is inadequate! The Engineering Statistics Handbook website discusses this problem and gives references; scroll down to "Disadvantages of weighted least squares". Their advice is:

> It is important to remain aware of this potential problem, and to only use weighted least squares when the weights can be estimated precisely relative to one another [Carroll and Ruppert (1988), Ryan (1997)].

## Best Answer

You are asking about what is called "weighted least squares". The idea is this: if, for some values $X_i$ of the independent variable, your observations show more variation in the dependent variable $Y_i$ than for other $X_i$ values, then those observations will also tend to have larger residuals around the regression line, and hence influence the sum of squared residuals more than the observations with smaller variation do. In general, you would not want this to be the case.

If you know the std. deviation $\sigma_i$ of your $Y$ data for each particular $X_i$ value, you can transform your $Y_i$ and $X_i$ data by dividing them by the known std. deviation $\sigma_i$. If you then regress $\frac{Y_i}{\sigma_i}$ on $\frac{X_i}{\sigma_i}$, the residuals in this new regression have the same variance for each $X_i$ value.

Mostly, the true $\sigma_i$ are unknown, and instead the sample estimates $s_i$ are used. In the situation you describe you know the std. errors of the means in all the studies, and thus you can divide through these std. errors.

EDIT

There is a tricky detail I forgot to mention! The intercept term also has to be divided by $s_i$, and so does the residual term $r_i$. I will demonstrate this in the R script below too. So, the regression equation changes from

$y_i = b_0 + b_1x_i + r_i$

into

$\frac{y_i}{s_i} = b_0\frac{1}{s_i} + b_1\frac{x_i} {s_i} + \frac{r_i}{s_i}$

In the "transformed" regression equation, the sum of squares of the (transformed) residuals which is minimized is $\sum\frac{r_i^2}{\ s_i^2}$. Due to the squares, it is said that the "regression weights" are equal to $\frac{1}{s_i^2}$.

If you used $\frac{9}{s_i^2}$ for the weights instead of simply $\frac{1}{s_i^2}$, then the sum of squares of the residuals would be nine times as high, and hence the residual standard error would be $\sqrt{9} = 3$ times as high, as you can verify in the script below by comparing the results of model2 and model3. But the regression coefficients, their std. errors, and the p-values do not change!
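This scale-invariance can be checked directly; the numpy sketch below (hypothetical data, standing in for the models in the R script) shows that multiplying all weights by 9 leaves the coefficients unchanged while tripling the residual standard error:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
x = np.linspace(1, 10, n)
s = rng.uniform(0.5, 2.0, n)
y = 1.0 + 2.0 * x + rng.normal(0, s)
X = np.column_stack([np.ones(n), x])

def wls(weights):
    """Closed-form WLS fit; returns coefficients and weighted residual SE."""
    W = np.diag(weights)
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ b
    rse = np.sqrt(resid @ W @ resid / (n - 2))
    return b, rse

b1, rse1 = wls(1.0 / s**2)   # weights 1/s_i^2
b2, rse2 = wls(9.0 / s**2)   # weights 9/s_i^2

print(np.allclose(b1, b2), np.isclose(rse2, 3 * rse1))  # True True
```

Only the relative sizes of the weights matter for the coefficients; an overall scale factor just rescales the residual sum of squares.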

The results of this script are: