I feel like the concern should be with the underlying data-generating process and where you suspect 'error' or noise in your data is coming from. The whole point of taking averages is to be able to invoke a law-of-large-numbers argument that noise 'cancels out'.
1) For example, suppose we have measurement error in the amount of revenue generated but not in quantity (in reality this might be due to rounding, which is a very specific and ugly type of error); i.e., a demon adds iid noise to our observations, so that $R_{ig}=R^*_{ig}+\epsilon_{ig}$, where $R^*$ is the true revenue generated and $R$ is what we observe. Then averaging over all observed $R$ minimizes the relative influence of the noise, and dividing by the average of $Q$ should be the most efficient way to proceed. This is $\hat{P}$.
2) However, suppose that instead of observing $R$ and $Q$, we actually observe $Q$ and a noisy measure of price, $P=P^*+\epsilon$ (although this raises the question of why we even bother with $Q$ and $R$ in the first place, since we already have what we want, $P$, directly), and we use a spreadsheet to compute $R$ by multiplying $Q$ and $P$. Then it makes a lot more sense to calculate the ratios $R/Q$ and average those instead, as in $\tilde{P}$.
Roughly speaking, how you want the noise to 'cancel' will determine what averages you take, but you cannot know how the noise cancels unless you first specify where it enters. What if instead of additive noise you had multiplicative noise (multiply $R$ by plus or minus a few percent; this is actually very similar to part 2)? Then you would want to take logs, average the logs (since multiplicative noise is additive in logs), and re-exponentiate, and so on.
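To make the two error structures concrete, here is a minimal simulation sketch (all names, sample sizes, and noise scales are illustrative, not from the question): under additive noise in $R$ the ratio of sums recovers the price well, while under multiplicative noise it is the log-average that is natural.

```python
import numpy as np

rng = np.random.default_rng(0)
P_true = 5.0
Q = rng.uniform(10, 100, 1000)  # quantities, assumed observed without error

# Case 1: additive measurement error in revenue
R_add = P_true * Q + rng.normal(0, 5, Q.size)
P_hat = R_add.sum() / Q.sum()          # ratio of sums, as in the P-hat estimator

# Case 2: multiplicative error (revenue off by a few percent either way)
R_mul = P_true * Q * np.exp(rng.normal(0, 0.03, Q.size))
# the noise is additive in logs, so average the logs and re-exponentiate
P_geo = np.exp(np.mean(np.log(R_mul / Q)))
```

Both estimators land very close to the true price of 5 in their respective scenarios, because each one lets its particular kind of noise cancel in the averaging.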
Edit:
I'd argue that the above answers your 2nd point, since bias properties are a function of the error structure (of which I gave two examples), but I'll answer the 3rd question in particular.
If we assume no noise in our observations of $Q$ but additive iid $\epsilon \sim N(0,\sigma^2)$ noise in $R$, then our modeling assumption is $$R_i=R^*_i+\epsilon_i=Q_i P_i+\epsilon_i$$
Solving for $P$ is then just an OLS regression of $R_i$ on $Q_i$ with the intercept forced through zero and two subgroups $g$, which means we can run a Chow test for equality of $P$ across the two subgroups. Using the notation of the Wikipedia example, you would just have $y_t=b_1 x_{1t}+\epsilon$ and $y_t=b_2 x_{2t} + \epsilon$ for your two groups, with $y=R$, $b=P$, $x=Q$.
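As a sketch of this procedure, here is a minimal numpy implementation of the zero-intercept OLS fit and the resulting Chow $F$ statistic (the function name, sample sizes, and noise scale are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_through_origin(R, Q):
    """Zero-intercept OLS: slope estimate and residual sum of squares."""
    b = (Q @ R) / (Q @ Q)
    resid = R - b * Q
    return b, resid @ resid

# two groups with the same true price; additive noise in revenue only
P_true, sigma = 5.0, 2.0
Q1, Q2 = rng.uniform(10, 100, 50), rng.uniform(10, 100, 60)
R1 = P_true * Q1 + rng.normal(0, sigma, Q1.size)
R2 = P_true * Q2 + rng.normal(0, sigma, Q2.size)

b1, ssr1 = ols_through_origin(R1, Q1)
b2, ssr2 = ols_through_origin(R2, Q2)
_, ssr_pooled = ols_through_origin(np.concatenate([R1, R2]),
                                   np.concatenate([Q1, Q2]))

# Chow F statistic with k = 1 parameter per model (slope only, no intercept)
k, n = 1, Q1.size + Q2.size
F = ((ssr_pooled - (ssr1 + ssr2)) / k) / ((ssr1 + ssr2) / (n - 2 * k))
```

Under the null of equal prices, $F$ follows an $F(k,\, n-2k)$ distribution, so you would compare it against that distribution's critical value.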
If you insist on using the quotient estimators directly (which you shouldn't if you have enough assumptions to use an OLS-based method), then still assuming additive errors gives $$\sum_i R_i \sim N\Big(\sum_i R_i^*,\; N\sigma^2\Big), \qquad \frac{\sum_i R_i}{\sum_i Q_i} \sim N\Big(P,\; \frac{N\sigma^2}{\big(\sum_i Q_i\big)^2}\Big)$$
But then you have to estimate $\sigma^2$, which is usually done by taking residuals after performing an OLS fit anyway. As before, any discussion of variance properties hinges on the properties of the underlying DGP and where the noise is: if we assumed $Q$ was measured with error, we could not even derive the variance analytically, since the variance of a quotient of two random variables is generally a mess.
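If you want to sanity-check the sampling distribution above by simulation, a quick Monte Carlo sketch (illustrative parameter values, fixed design $Q$ with no measurement error) looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)
P_true, sigma, N = 5.0, 2.0, 40
Q = rng.uniform(10, 100, N)           # fixed quantities, no noise in Q

reps = 20000
eps = rng.normal(0, sigma, (reps, N)) # additive iid noise in revenue
R = P_true * Q + eps
P_hat = R.sum(axis=1) / Q.sum()       # quotient estimator, one per replication

# theoretical variance from the normal approximation above
theory_var = N * sigma**2 / Q.sum()**2
```

The empirical mean of `P_hat` sits on top of the true price, and its empirical variance matches `theory_var` to within Monte Carlo error.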
Best Answer
Estimator of what? If you want to estimate $E\left(\frac{y}{x}\right)$, then what you have proposed, $\overline{\;\frac{y}{x}}=\frac{1}{N} \sum_i\frac{y_i}{x_i}$, the sample mean of $\frac{y_i}{x_i}$, is often a dandy estimator (unbiased, consistent, etc.). On the other hand, if you want to estimate $\frac{E(y)}{E(x)}$, then it would be better to go with $\frac{\overline{y}}{\overline{x}}$. Under pretty weak assumptions, it's consistent, at least, though not likely unbiased.
The real question is, what do you want to estimate? Suppose $y$ is dollars spent on food and $x$ is dollars in income and $i$ is a family. Then, the ratio $\frac{y_i}{x_i}$ is the proportion of family $i$'s income spent on food. The parameter $E(\frac{y}{x})$ is the average proportion of income spent on food over families. The parameter $\frac{E(y)}{E(x)}$ is the proportion of aggregate income spent on food. There is no reason in the world for these two things to be the same. Here is an example population:
\begin{align} \begin{array}{r r r r} \text{Family} & \text{Income} & \text{Food} & \text{Ratio}\\ 1 & 100000 & 20000 & 0.2\\ 2 & 10000 & 8000 & 0.8\\ 3 & 10000 & 8000 & 0.8 \end{array} \end{align}
So, the average ratio is 0.6 ($\frac{0.2+0.8+0.8}{3}$), while the ratio of the averages is 0.3 ($\frac{20000+8000+8000}{100000+10000+10000}$). Neither one of these is right. Neither one of these is wrong. They are just estimating different things. The aggregate ratio between food spending and income is 0.3. The average family, on the other hand, has a ratio between food spending and income of 0.6.
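The two numbers can be checked in a couple of lines of plain Python, using the example population above:

```python
income = [100000, 10000, 10000]
food = [20000, 8000, 8000]

# average of the per-family ratios
mean_of_ratios = sum(f / x for f, x in zip(food, income)) / len(income)

# ratio of the averages (equivalently, of the totals)
ratio_of_means = sum(food) / sum(income)

print(mean_of_ratios, ratio_of_means)   # ≈ 0.6 and 0.3
```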
One way to think about it is that the average of the ratios weights each family's ratio the same in computing the mean, and that the ratio of the averages weights the rich family's ratio more. Watch:
\begin{align} \frac{\overline{y}}{\overline{x}} &= \frac{\sum y_i}{\sum x_i}\\ &= \sum_i\frac{1}{\sum x_i}y_i\\ &= \sum_i\frac{x_i}{\sum x_i}\frac{y_i}{x_i}\\ & \\ \overline{\;\frac{y}{x}} &= \sum_i \frac{1}{N}\frac{y_i}{x_i} \end{align}
In the ratio of means, family $i$'s ratio "counts" $\frac{x_i}{\sum x_i}$ in the overall mean --- it counts in proportion to its income. In the mean of the ratios, each family's ratio counts the same, $\frac{1}{N}$.
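The weighting identity above is easy to verify numerically, e.g. with a small random example (the data here are arbitrary, just for the check):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 5)   # e.g. incomes
y = rng.uniform(1, 10, 5)   # e.g. food spending

ratio_of_means = y.sum() / x.sum()

# same thing, written as a weighted mean of the ratios y_i / x_i
income_weighted_ratios = np.sum((x / x.sum()) * (y / x))

# the equally weighted version, for comparison
mean_of_ratios = np.mean(y / x)
```

`ratio_of_means` and `income_weighted_ratios` agree to floating-point precision, while `mean_of_ratios` generally differs.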
Which one of these you want depends on what you are using it for. If I asked you a question like "If I give every family \$1 extra, how much extra will be spent on food?", how might you approach the problem? Well, you might decide to assume that each family's ratio will stay the same after the experiment (this assumption will drive economists crazy, conflating as it does marginal and average, but that just adds to the fun). Then, the increase in food spending from this experiment will be $N \cdot E\left(\frac{y}{x}\right)$. On the other hand, if I said that I was going to give away $N$ dollars to these families in proportion to their current income (more to rich families, less to poor), then the exact same reasoning would lead you to expect an $N \cdot \frac{E(y)}{E(x)}$ dollar increase in spending on food.
The usual reason people give for liking the ratio of averages is that it allows you to do some kinds of arithmetic more easily. So, for example, suppose I say, "There is a population of 50 families with average income equal to \$56,000. What will their total food spending be?" If you have the ratio of averages calculated, then you can answer something like "Assuming the distribution of income in your population of families is the same as the distribution of income in the sample I have, total spending on food should be about $50\cdot\frac{\overline{y}}{\overline{x}}\cdot\$56,000$."
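That back-of-envelope calculation, using the 0.3 aggregate ratio from the example population above, is just:

```python
ratio_of_means = 0.3      # aggregate food share from the example sample
n_families = 50
avg_income = 56000        # dollars

total_food = n_families * ratio_of_means * avg_income
print(total_food)   # 840000.0
```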