Let's cover each case in turn.
Two Independent Samples
Let $\bar{x}_1$ and $\bar{x}_2$ denote the observed means in the first and second group, respectively, $s_1$ and $s_2$ the standard deviations, and $n_1$ and $n_2$ the sample sizes. Then the log-transformed ratio of means (also called the log response ratio) is given by $$y = \ln(\bar{x}_1 / \bar{x}_2),$$ for which we can estimate the sampling variance with the equation $$Var[y] = \frac{s_1^2}{n_1 \bar{x}_1^2} + \frac{s_2^2}{n_2 \bar{x}_2^2}.$$ See, for example, Hedges et al. (1999).
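As a minimal sketch, the two equations above can be computed directly; the function name and the numeric inputs below are made up for illustration:

```python
import math

def lnrr_independent(m1, sd1, n1, m2, sd2, n2):
    """Log response ratio and its sampling variance for two
    independent groups (Hedges et al., 1999)."""
    y = math.log(m1 / m2)                                  # y = ln(x1bar / x2bar)
    v = sd1**2 / (n1 * m1**2) + sd2**2 / (n2 * m2**2)      # Var[y]
    return y, v

# Hypothetical summary statistics for two independent groups:
y, v = lnrr_independent(m1=10.0, sd1=2.0, n1=25, m2=8.0, sd2=2.5, n2=30)
```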
Two Dependent Samples
If you have two dependent samples (e.g., because the same units of analysis have been measured twice, such as before and after a particular treatment), then let $\bar{x}_1$ and $\bar{x}_2$ denote the means at the first and second measurement occasion, $s_1$ and $s_2$ the corresponding standard deviations, and $n$ the size of the single group. Again, we can define the log response ratio as $$y = \ln(\bar{x}_1 / \bar{x}_2).$$ The sampling variance can now be estimated with $$Var[y] = \frac{s_1^2}{n \bar{x}_1^2} + \frac{s_2^2}{n \bar{x}_2^2} - \frac{2 r s_1 s_2}{\bar{x}_1 \bar{x}_2 n},$$ where $r$ is the correlation of the measurements between the two measurement occasions. See Lajeunesse (2011). The same equation can be used in a matched-pairs design, except that subscripts 1 and 2 then represent the two groups.
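The dependent-samples version only differs by the correlation term. A sketch (function name and inputs are again hypothetical):

```python
import math

def lnrr_dependent(m1, sd1, m2, sd2, n, r):
    """Log response ratio and sampling variance for two dependent
    (paired / repeated) measurements (Lajeunesse, 2011).
    r is the correlation between the two measurement occasions."""
    y = math.log(m1 / m2)
    v = (sd1**2 / (n * m1**2)
         + sd2**2 / (n * m2**2)
         - 2 * r * sd1 * sd2 / (m1 * m2 * n))   # covariance correction
    return y, v

# Hypothetical before/after summary statistics for a single group of n = 20:
y, v = lnrr_dependent(m1=12.0, sd1=3.0, m2=10.0, sd2=2.5, n=20, r=0.6)
```

Note that a positive correlation reduces the sampling variance relative to treating the two measurements as independent.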
Note that you will need an estimate of the correlation to use this equation. If it is not reported and cannot be derived from other information reported in a study, you could try contacting the authors. Alternatively, you may just have to make a reasonable guess and then conduct a sensitivity analysis in the end to make sure that the conclusions from the meta-analysis do not depend on the guess.
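One simple form such a sensitivity check can take is recomputing $Var[y]$ over a grid of plausible guesses for $r$ to see how strongly the variance (and hence the study's weight) depends on the guess. The study values below are hypothetical:

```python
# Hypothetical study summary statistics; r is the unknown correlation we must guess.
m1, sd1, m2, sd2, n = 12.0, 3.0, 10.0, 2.5, 20

variances = {}
for r in (0.2, 0.4, 0.6, 0.8):
    # Same sampling-variance formula as above, with the guessed r plugged in.
    v = (sd1**2 / (n * m1**2) + sd2**2 / (n * m2**2)
         - 2 * r * sd1 * sd2 / (m1 * m2 * n))
    variances[r] = v
    print(f"r = {r:.1f}  ->  Var[y] = {v:.5f}")
```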
References
Hedges, L. V., Gurevitch, J., & Curtis, P. S. (1999). The meta-analysis of response ratios in experimental ecology. Ecology, 80, 1150-1156.
Lajeunesse, M. J. (2011). On the meta-analysis of response ratios for studies with correlated and multi-group designs. Ecology, 92, 2049-2055.
If you meta-analyze mean differences with weights of $n$ instead of $1/\text{SE}^2$ (inverse variance) - assuming groups of equal size are being compared - this gets you an appropriate average effect estimate under the assumption that the variability is the same across studies. That is, the weights would be proportional to the ones you would use if the standard errors were all exactly $2\sigma/\sqrt{n}$ for a standard deviation $\sigma$ that is assumed to be identical across trials. You will no longer get a meaningful overall standard error or confidence interval for your overall estimate, though, because you are throwing away the information about the sampling variability contained in $\hat{\sigma}$.
Also note that if groups are not of equal size, $n$ is not the correct weight, because the standard error for the difference of the means of two normal distributions is $\sqrt{\sigma^2_1/n_1 + \sigma^2_2/n_2}$, and this only simplifies to $2\sigma/\sqrt{n}$ if $n_1 = n_2 = n/2$ (plus $\sigma = \sigma_1 = \sigma_2$).
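A quick numeric sketch of the equal-group-size case (all study values hypothetical): with a common $\sigma$, the inverse-variance weights $n_i/(4\sigma^2)$ are proportional to $n_i$, so the two weighted averages coincide.

```python
sigma = 4.0                   # common standard deviation assumed across trials
d = [1.2, 0.8, 1.5]           # hypothetical observed mean differences
n = [40, 100, 60]             # total sample size per study (two equal groups)

# Inverse-variance weights: SE_i = 2*sigma/sqrt(n_i), so w_i = n_i / (4*sigma^2)
w_iv = [ni / (4 * sigma**2) for ni in n]
# Sample-size weights
w_n = n

est_iv = sum(w * di for w, di in zip(w_iv, d)) / sum(w_iv)
est_n = sum(w * di for w, di in zip(w_n, d)) / sum(w_n)
# est_iv == est_n, since w_iv is proportional to n; but without sigma we
# could not compute a standard error for the combined estimate.
```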
You could of course impute the missing standard errors under the assumption that $\sigma$ is the same across the studies. Studies without a reported standard error are then assigned the average variability of the studies for which it is known, and that is easy to do.
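A minimal sketch of that imputation, assuming two equal-sized groups per study so that $\text{SE} = 2\sigma/\sqrt{n}$ (the data tuples are made up for illustration):

```python
import math

# Hypothetical data: (mean difference, SE, total n); None = SE not reported.
studies = [(1.2, 0.9, 40), (0.8, None, 100), (1.5, 0.7, 60)]

# Back out a sigma estimate from each study with a known SE
# (SE = 2*sigma/sqrt(n)), then average them across studies.
sigmas = [se * math.sqrt(n) / 2 for _, se, n in studies if se is not None]
sigma_hat = sum(sigmas) / len(sigmas)

# Impute the missing SEs from the pooled sigma estimate.
imputed = [(d, se if se is not None else 2 * sigma_hat / math.sqrt(n), n)
           for d, se, n in studies]
```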
Another thought is that using untransformed US dollars, or US dollars per unit, may or may not be problematic. It can sometimes be preferable to apply, e.g., a log-transformation before meta-analyzing and to back-transform afterwards.
Best Answer
The question is difficult to answer, because it is so indicative of a general confusion and muddled state of affairs in much of the meta-analytic literature (the OP is not to blame here -- it's the literature and the description of the methods, models, and assumptions that is often a mess).
But to make a long story short: No, if you want to combine a bunch of estimates (that quantify some sort of effect, a degree of association, or some other outcome deemed to be relevant) and it is sensible to combine those numbers, then you could just take their (unweighted) average and that would be perfectly fine. Nothing wrong with that and under the models we typically assume when we conduct a meta-analysis, this even gives you an unbiased estimate (assuming that the estimates themselves are unbiased). So, no, you don't need the sampling variances to combine the estimates.
So why is inverse-variance weighting almost synonymous with actually doing a meta-analysis? This has to do with the general idea that we attach more credibility to large studies (with smaller sampling variances) than to smaller studies (with larger sampling variances). In fact, under the assumptions of the usual models, using inverse-variance weighting leads to the uniformly minimum variance unbiased estimator (UMVUE) -- well, kind of: again assuming unbiased estimates, and ignoring the fact that the sampling variances are often not exactly known but are estimated themselves. In random-effects models, we must also estimate the variance component for heterogeneity, but then we just treat it as a known constant, which isn't quite right either. But yes, we kind of get the UMVUE with inverse-variance weighting if we squint our eyes very hard and ignore some of these issues.
So it's the efficiency of the estimator that is at stake here, not the unbiasedness. But even an unweighted average will often not be a whole lot less efficient than an inverse-variance weighted average, especially in random-effects models and when the amount of heterogeneity is large (in which case the usual weighting scheme leads to almost uniform weights anyway!). But even in fixed-effects models or with little heterogeneity, the difference often isn't overwhelming.
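The parenthetical claim is easy to see numerically: random-effects weights are $1/(v_i + \tau^2)$, so as the heterogeneity variance $\tau^2$ grows, the weights flatten out. A sketch with hypothetical sampling variances:

```python
v = [0.01, 0.05, 0.20]   # hypothetical per-study sampling variances

rel_weights = {}
for tau2 in (0.0, 0.1, 1.0):
    # Random-effects weights 1/(v_i + tau^2), normalized to sum to 1.
    w = [1 / (vi + tau2) for vi in v]
    total = sum(w)
    rel_weights[tau2] = [wi / total for wi in w]
    print(f"tau2 = {tau2:4.1f}  relative weights = "
          + ", ".join(f"{x:.3f}" for x in rel_weights[tau2]))
```

With $\tau^2 = 0$ the weights are very unequal; with $\tau^2 = 1$ they are nearly uniform, so the weighted average is close to the unweighted one.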
And as you mention, one can also easily consider other weighting schemes, such as weighting by sample size or some function thereof, but again this is just an attempt to get something close to the inverse-variance weights (since the sampling variances are, to a large extent, determined by the sample size of a study).
But really, one can and should 'decouple' the issue of weights and variances altogether. They are really two separate pieces that one has to think about. But that's just not how things are typically presented in the literature.
However, the point here is that you really need to think about both. Yes, you can take an unweighted average as your combined estimate and that would, in essence, be a meta-analysis, but once you want to start doing inferences based on that combined estimate (e.g., conduct a hypothesis test, construct a confidence interval), you need to know the sampling variances (and the amount of heterogeneity). Think about it this way: If you combine a bunch of small (and/or very heterogeneous) studies, your point estimate is going to be a whole lot less precise than if you combine the same number of very large (and/or homogeneous) studies -- regardless of how you weighted your estimates when calculating the combined value.
Actually, there are even some ways around not knowing the sampling variances (and amount of heterogeneity) when we start doing inferential statistics. One can consider methods based on resampling (e.g., bootstrapping, permutation testing) or methods that yield consistent standard errors for the combined estimate even when we misspecify parts of the model -- but how well these approaches may work needs to be carefully evaluated on a case-by-case basis.
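For instance, a nonparametric bootstrap of the combined estimate resamples studies with replacement and uses the spread of the resampled averages as a standard error, without needing the per-study sampling variances. A sketch with a made-up set of study estimates (whether this is adequate for a given meta-analysis would need the case-by-case evaluation mentioned above):

```python
import random

random.seed(1)
estimates = [0.42, 0.10, 0.55, 0.31, 0.27, 0.48, 0.15, 0.38]  # hypothetical

# Resample studies with replacement; each resample yields one combined
# (unweighted) average.
boots = []
for _ in range(5000):
    sample = random.choices(estimates, k=len(estimates))
    boots.append(sum(sample) / len(sample))

boot_mean = sum(boots) / len(boots)
boot_se = (sum((b - boot_mean) ** 2 for b in boots) / (len(boots) - 1)) ** 0.5
```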