Let's cover each case in turn.
Two Independent Samples
Let $\bar{x}_1$ and $\bar{x}_2$ denote the observed means in the first and second group, respectively, $s_1$ and $s_2$ the standard deviations, and $n_1$ and $n_2$ the sample sizes. Then the log-transformed ratio of means (also called log response ratio) is given by $$y = \ln(\bar{x}_1 / \bar{x}_2),$$ for which we can estimate the sampling variance with the equation $$Var[y] = \frac{s_1^2}{n_1 \bar{x}_1^2} + \frac{s_2^2}{n_2 \bar{x}_2^2}.$$ See, for example, Hedges et al. (1999).
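The two equations above can be computed directly; here is a minimal Python sketch, where `lrr_independent` is a hypothetical helper name and the summary statistics are made-up values:

```python
import math

def lrr_independent(m1, s1, n1, m2, s2, n2):
    """Log response ratio and its sampling variance for two
    independent samples (Hedges et al., 1999)."""
    y = math.log(m1 / m2)
    v = s1**2 / (n1 * m1**2) + s2**2 / (n2 * m2**2)
    return y, v

# hypothetical summary statistics for two groups
y, v = lrr_independent(m1=12.0, s1=3.0, n1=20, m2=10.0, s2=2.5, n2=25)
```

The pair `(y, v)` is exactly what an inverse-variance meta-analysis needs as input for each study.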
Two Dependent Samples
If you have two dependent samples (e.g., because the same units of analysis have been measured twice, such as before and after a particular treatment), then let $\bar{x}_1$ and $\bar{x}_2$ denote the means at the first and second measurement occasion, $s_1$ and $s_2$ the corresponding standard deviations, and $n$ the sample size, which is the same at both occasions. Again, we can define the log response ratio as $$y = \ln(\bar{x}_1 / \bar{x}_2).$$ The sampling variance can now be estimated with $$Var[y] = \frac{s_1^2}{n \bar{x}_1^2} + \frac{s_2^2}{n \bar{x}_2^2} - \frac{2 r s_1 s_2}{\bar{x}_1 \bar{x}_2 n},$$ where $r$ is the correlation of the measurements between the two measurement occasions. See Lajeunesse (2011). The same equation can be used in a matched-pairs design, except that subscripts 1 and 2 then represent the two groups.
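The dependent-samples variance only differs from the independent case by the subtracted covariance term; a minimal sketch (again with a hypothetical helper name and made-up numbers):

```python
import math

def lrr_dependent(m1, s1, m2, s2, n, r):
    """Log response ratio and sampling variance for two dependent
    samples (Lajeunesse, 2011); r is the correlation between the
    two measurement occasions."""
    y = math.log(m1 / m2)
    v = (s1**2 / (n * m1**2) + s2**2 / (n * m2**2)
         - 2 * r * s1 * s2 / (m1 * m2 * n))  # covariance correction
    return y, v

# hypothetical pre/post summary statistics with an assumed r = 0.5
y, v = lrr_dependent(m1=12.0, s1=3.0, m2=10.0, s2=2.5, n=20, r=0.5)
```

Note that a positive $r$ makes the variance smaller than treating the two samples as independent would, so ignoring the dependence is conservative for the variance but misstates the study's precision.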
Note that you will need an estimate of the correlation to use this equation. If it is not reported and cannot be derived from other information reported in a study, you could try contacting the authors. Alternatively, you may just have to make a reasonable guess and then conduct a sensitivity analysis at the end to make sure that the conclusions of the meta-analysis do not depend on that guess.
References
Hedges, L. V., Gurevitch, J., & Curtis, P. S. (1999). The meta-analysis of response ratios in experimental ecology. Ecology, 80, 1150-1156.
Lajeunesse, M. J. (2011). On the meta-analysis of response ratios for studies with correlated and multi-group designs. Ecology, 92, 2049-2055.
If you meta-analyze mean differences with weights of $n$ instead of $1/\text{SE}^2$ (inverse variance), assuming groups of equal size are being compared, this gives you an appropriate average effect estimate under the assumption that variability is the same across studies. That is, the weights would be proportional to the ones you would use if the standard errors were all exactly $2\hat{\sigma}/\sqrt{n}$ for a standard deviation $\sigma$ that is assumed to be identical across trials. However, you will no longer get a meaningful overall standard error or confidence interval for your overall estimate, because you are throwing away the information that $\hat{\sigma}$ carries about the sampling variability.
Also note that if the groups are not of equal size, $n$ is not the correct weight: the standard error for the difference of two sample means is $\sqrt{\sigma^2_1/n_1 + \sigma^2_2/n_2}$, and this only simplifies to $2\sigma/\sqrt{n}$ if $n_1 = n_2 = n/2$ (and $\sigma = \sigma_1 = \sigma_2$).
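A quick numerical check of that simplification (in Python, with made-up values; `se_diff` is a hypothetical helper):

```python
import math

def se_diff(sigma1, n1, sigma2, n2):
    """Standard error of the difference of two independent sample means."""
    return math.sqrt(sigma1**2 / n1 + sigma2**2 / n2)

# equal split of n = 40 with a common sigma: matches 2*sigma/sqrt(n)
sigma, n = 1.5, 40
equal = se_diff(sigma, n // 2, sigma, n // 2)
shortcut = 2 * sigma / math.sqrt(n)

# unequal split: the shortcut no longer applies
unequal = se_diff(sigma, 10, sigma, 30)
```

With the equal split, `equal` and `shortcut` agree exactly; with the 10/30 split they do not, which is why $n$-weighting breaks down for unbalanced groups.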
You could of course impute the missing standard errors under the assumption that $\sigma$ is the same across the studies. This amounts to assuming that studies without a reported standard error have the same underlying variability as the average of the studies for which it is known, and it is easy to do.
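A minimal sketch of that imputation, assuming mean differences from equally sized groups so that $\text{SE} = 2\sigma/\sqrt{n}$ holds; `impute_se` and the study tuples are hypothetical:

```python
import math

def impute_se(studies):
    """Impute missing standard errors assuming a common underlying
    sigma across studies. Each study is (n_total, se), with se=None
    when unreported; equal-sized groups are assumed, so
    se = 2*sigma/sqrt(n)."""
    # back out sigma from each reported SE, then average
    sigmas = [se * math.sqrt(n) / 2 for n, se in studies if se is not None]
    sigma_hat = sum(sigmas) / len(sigmas)
    return [(n, se if se is not None else 2 * sigma_hat / math.sqrt(n))
            for n, se in studies]

# one study reports its SE, one does not
filled = impute_se([(40, 0.5), (100, None)])
```

As with any imputation, a sensitivity analysis varying the assumed common $\sigma$ is advisable.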
Another thought is that using untransformed US dollars or US dollars per unit might or might not be problematic. Sometimes it can be desirable to use e.g. a log-transformation to meta-analyze and then to back-transform afterwards.
Best Answer
How to scale the squares in a forest plot?
I'd argue for scaling the size (area) of the squares proportional to the weight that the study contributed to the meta-analysis. By scaling by weight, the area of the square is a direct visual cue of the relative impact a study had on the summary effect. The weight is, among other things, inversely related to the sampling variance (i.e., it reflects precision), which, in turn, is usually (but not always!) directly related to the study sample size. This makes the most sense to me because the forest plot is a visual display of a statistical analysis which, in effect, is a weighted mean. Scaling by quality seems problematic to me because quality is difficult to measure objectively and the summary effect is not calculated using "quality weights" (at least I've never seen it).
This seems to be supported by a number of authors. Steff Lewis and Mike Clarke$^{[1]}$ go into the history of the forest plot and write
Michael Borenstein et al.$^{[2]}$ recommend the same when they explain
This is again mirrored in Jonathan Sterne's book$^{[3]}$:
Lastly, in The Handbook of Research Synthesis and Meta-Analysis$^{[4]}$ we read
A disadvantage of this approach becomes obvious when you want to present the results of a fixed-effects and a random-effects meta-analysis (or just two different analysis methods) in the same forest plot. Different analysis methods likely assign different weights to the studies and so the scaling of the area of the squares becomes ambiguous.
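The area-proportional scaling described above reduces to a one-liner: if the area is proportional to the weight, the side length must be proportional to the square root of the weight. A sketch with a hypothetical helper name and made-up weights:

```python
import math

def square_sides(weights, max_side=1.0):
    """Side lengths for forest-plot squares whose *area* is
    proportional to each study's meta-analytic weight; the
    largest square gets side length max_side."""
    wmax = max(weights)
    return [max_side * math.sqrt(w / wmax) for w in weights]

# a study with 4x the weight gets a square with 2x the side (4x the area)
sides = square_sides([1.0, 4.0])
```

Scaling the side length (rather than the area) directly by the weight would visually exaggerate the differences between studies, which is why the square root is taken.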
Estimators of between-study variance: Which one to use?
The metafor package for R offers no fewer than nine different estimators for the amount of heterogeneity. Descriptions of these estimators can be found in references $[4, 5, 6, 7, 8]$. The question remains: which one to use? Veroniki et al.$^{[7]}$ and Langan et al.$^{[8]}$ recommend the Paule-Mandel estimator or restricted maximum likelihood, based on simulation studies. A newer publication by Langan et al.$^{[9]}$ made the following recommendations:
See also this question.
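To illustrate what such an estimator does, here is a minimal Python sketch of the classic DerSimonian-Laird estimator (`method="DL"` in metafor), one of the nine; `dersimonian_laird` is a hypothetical name and the inputs are made-up effect sizes and sampling variances:

```python
def dersimonian_laird(y, v):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2, given effect sizes y and sampling variances v."""
    w = [1.0 / vi for vi in v]               # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Q statistic
    k = len(y)
    c = sw - sum(wi**2 for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)       # truncate at zero

# three hypothetical studies with equal sampling variances
tau2 = dersimonian_laird([0.0, 2.0, 4.0], [1.0, 1.0, 1.0])
```

The other estimators (Paule-Mandel, REML, etc.) differ in how they solve for $\tau^2$, but all feed into the same random-effects weights $1/(v_i + \hat{\tau}^2)$.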
References
$[1]$ Steff Lewis, Mike Clarke. Forest plots: trying to see the wood and the trees. BMJ. 2001. 322:1479 [link]
$[2]$ Michael Borenstein, Larry V. Hedges, Julian P.T. Higgins, Hannah R. Rothstein. Introduction to Meta-Analysis. Wiley 2009.
$[3]$ Jonathan Sterne. Meta-Analysis: An Updated Collection from the Stata Journal. Stata Press 2009.
$[4]$ Harris Cooper, Larry V. Hedges, Jeffrey C. Valentine (ed). The Handbook of Research Synthesis and Meta-Analysis. 2nd ed. Russell Sage Foundation 2009.
$[5]$ Rebecca DerSimonian, Raghu Kacker. Random-effects model for meta-analysis of clinical trials: An update. Contemp Clin Trials 28. 2007. 105-114. [link]
$[6]$ Wolfgang Viechtbauer, José Antonio López-López. A Comparison of Procedures to Test for Moderators in Mixed-Effects Meta-Regression Models. Psychological Methods 20(3). 2015. 360-374. [link]
$[7]$ Areti Angeliki Veroniki et al. Methods to estimate the between-study variance and its uncertainty in meta-analysis. Res Syn Meth 7. 2016. 55-79. [link]
$[8]$ Dean Langan, Julian PT Higgins, Mark Simmonds. Comparative performance of heterogeneity variance estimators in meta-analysis: a review of simulation studies. Res Syn Meth 8. 2017. 181-198. [link]
$[9]$ Dean Langan et al. A comparison of heterogeneity variance estimators in simulated random-effects meta-analyses. Res Syn Meth 10. 2019. 83-98. [link]