The problem is that you haven't really defined what it means to have a good or fair rating. You suggest in a comment on @Kevin's answer that you don't like it when one bad review drags down an item. But if you compare two items, one with a "perfect record" and the other with a single bad review, maybe that difference should be reflected.
There's a whole (high-dimensional) continuum between median and mean. Order the votes by value, then take a weighted average with weights that depend on position in that order. The mean corresponds to all weights being equal; the median to only the middle one or two entries getting nonzero weight; a trimmed mean to giving every entry except the first and last few the same weight. But you could also decide to weight the $k$th of $n$ sorted samples by $\frac{1}{1 + (2 k - 1 - n)^2}$ or $\exp\left(-\frac{(2k - 1 - n)^2}{n^2}\right)$, to throw something random in there. Maybe such a weighted average, where the outliers get less weight but still a nonzero amount, could combine good properties of the median and the mean?
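As a quick sketch of that idea (the sample values are invented, and the two weight functions are just the illustrative ones above, not a recommendation):

```python
import math

def weighted_order_mean(data, weight):
    """Weighted average of the sorted sample, with weights that
    depend only on the position k in the sorted order."""
    xs = sorted(data)
    n = len(xs)
    ws = [weight(k, n) for k in range(1, n + 1)]
    total = sum(ws)
    return sum(w * x for w, x in zip(ws, xs)) / total

# All weights equal: recovers the ordinary mean.
flat_w = lambda k, n: 1.0

# The two example weightings: outliers get small but nonzero weight.
cauchy_w = lambda k, n: 1.0 / (1.0 + (2 * k - 1 - n) ** 2)
gauss_w = lambda k, n: math.exp(-((2 * k - 1 - n) ** 2) / n ** 2)

sample = [1, 2, 3, 4, 100]  # one wild value
print(weighted_order_mean(sample, flat_w))    # 22.0 -- dragged up by the outlier
print(weighted_order_mean(sample, cauchy_w))  # lands between the median (3) and the mean
```

The outlier still contributes, but its weight shrinks quadratically with its distance (in rank) from the middle, so the estimate sits between the median and the mean.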
Here's one way that you might regard a median as a "general sort of mean" -- first, carefully define your ordinary arithmetic mean in terms of order statistics:
$$\bar{x} = \sum_i w_i x_{(i)},\qquad w_i=\frac{1}{n}\,.$$
Then, by replacing those equal weights on the order statistics with some other weight function, we get a notion of "generalized mean" that accounts for order.
In that case, a host of potential measures of center become "generalized sorts of means". In the case of the median, for odd $n$, $w_{(n+1)/2}=1$ and all others are $0$; for even $n$, $w_{n/2}=w_{n/2+1}=\frac{1}{2}$ and all others are $0$.
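Concretely (with a small made-up sample), the same weighted sum of order statistics recovers both the mean and the median just by swapping weight vectors:

```python
import statistics

def order_weighted_estimate(data, weights):
    """Sum of w_i * x_(i): a weighted combination of the order statistics."""
    xs = sorted(data)
    return sum(w * x for w, x in zip(weights, xs))

def mean_weights(n):
    return [1.0 / n] * n  # w_i = 1/n for every i

def median_weights(n):
    w = [0.0] * n
    if n % 2 == 1:
        w[(n + 1) // 2 - 1] = 1.0        # w_{(n+1)/2} = 1 (0-based index)
    else:
        w[n // 2 - 1] = w[n // 2] = 0.5  # w_{n/2} = w_{n/2+1} = 1/2
    return w

data = [3, 1, 4, 1, 5, 9]
assert abs(order_weighted_estimate(data, mean_weights(6)) - statistics.mean(data)) < 1e-9
assert order_weighted_estimate(data, median_weights(6)) == statistics.median(data)
```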
Similarly, if we look at M-estimation, M-estimators of location can also be thought of as generalizations of the arithmetic mean (for the mean, $\rho$ is quadratic, $\psi$ is linear, and the weight function is flat), and the median also falls into this class of generalizations. This is a somewhat different generalization than the previous one.
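A rough numerical illustration of that view (brute-force grid minimization on an invented sample; a real M-estimation routine would iterate rather than grid-search): the mean minimizes $\sum_i \rho(x_i - c)$ for quadratic $\rho$, the median for $\rho(u)=|u|$.

```python
def m_estimate(data, rho, steps=20001):
    """Brute-force M-estimate of location: the c minimizing sum(rho(x - c))."""
    lo, hi = min(data), max(data)
    best_c, best_loss = lo, float("inf")
    for i in range(steps):
        c = lo + (hi - lo) * i / (steps - 1)
        loss = sum(rho(x - c) for x in data)
        if loss < best_loss:
            best_c, best_loss = c, loss
    return best_c

data = [1.0, 2.0, 3.0, 4.0, 100.0]
mean_hat = m_estimate(data, lambda u: u * u)  # quadratic rho -> the mean (~22 here)
median_hat = m_estimate(data, abs)            # absolute-value rho -> the median (~3 here)
```

Other choices of $\rho$ (Huber, Tukey biweight, ...) interpolate between these two extremes.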
There are a variety of other ways we might extend the notion of 'mean' that could include median.
Best Answer
Consider what a trimmed mean is: in the prototypical case, you first sort your data in increasing order. Then you count up from the bottom until you've passed your trimming percentage and discard those values. For example, a 10% trimmed mean is common; in that case you count up from the lowest value until you've passed 10% of all the data in your set, and the values below that mark are set aside. Likewise, you count down from the highest value until you've passed your trimming percentage, and set all values greater than that aside. You are now left with the middle 80%. You take the mean of that, and that is your 10% trimmed mean. (Note that you can trim unequal proportions from the two tails, or only trim one tail, but these approaches are less common and don't seem as applicable to your situation.)
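A minimal sketch of that procedure (the data are invented, and `int(n * proportion)` is just one common convention for rounding the trim count):

```python
def trimmed_mean(data, proportion):
    """Symmetric trimmed mean: drop `proportion` of the observations
    from each tail, then average what's left."""
    xs = sorted(data)
    n = len(xs)
    k = int(n * proportion)  # observations discarded from EACH end
    kept = xs[k:n - k]
    return sum(kept) / len(kept)

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 500]  # ten values, one wild one
print(sum(data) / len(data))     # 54.5 -- the plain mean, wrecked by the outlier
print(trimmed_mean(data, 0.10))  # 5.5 -- average of the middle 80% only
```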
Now think of what would happen if you calculated a 50% trimmed mean. The bottom half would be set aside, as would the top half. You would be left with only the single value in the middle (ordinally) when $n$ is odd, or the middle two values when $n$ is even. You would take the mean of what remains as your trimmed mean. Note, however, that this value is the median. In other words, the median is a trimmed mean (a 50% trimmed mean). It is just a very aggressive one. It assumes, in essence, that up to half of your data could be contaminated (the median's breakdown point is 50%, the highest possible). This gives you the ultimate protection against outliers at the expense of the ultimate loss of power / efficiency.
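To see the 50% case concretely (a self-contained sketch; capping the trim count at `(n - 1) // 2` is an assumption I'm adding so that even-sized samples keep their middle two values rather than being trimmed away entirely, which makes the 50% case match the usual median convention):

```python
import statistics

def trimmed_mean(data, proportion):
    xs = sorted(data)
    n = len(xs)
    k = min(int(n * proportion), (n - 1) // 2)  # never trim away the whole sample
    kept = xs[k:n - k]
    return sum(kept) / len(kept)

data = [2, 9, 4, 7, 1000]
assert trimmed_mean(data, 0.50) == statistics.median(data)  # both are 7
assert trimmed_mean([1, 2, 3, 4], 0.50) == statistics.median([1, 2, 3, 4])  # both 2.5
```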
My guess is that a median / 50% trimmed mean is much more aggressive than your data require, and too wasteful of the information available to you. If you have any sense of the proportion of outliers that exist, I would use that information to set the trimming percentage and use the corresponding trimmed mean. If you don't have any basis for choosing the trimming percentage, you could select one by cross validation, or use a robust regression analysis with only an intercept.
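One way that selection step might be sketched (this uses bootstrap resampling variability as the criterion, which is just one heuristic, not necessarily the cross-validation the answer has in mind; the ratings data and contamination level are invented):

```python
import random
import statistics

def trimmed_mean(data, p):
    xs = sorted(data)
    n = len(xs)
    k = min(int(n * p), (n - 1) // 2)
    kept = xs[k:n - k]
    return sum(kept) / len(kept)

def bootstrap_se(data, p, reps=1000, seed=0):
    """Standard error of the p-trimmed mean, estimated by resampling."""
    rng = random.Random(seed)
    n = len(data)
    estimates = [
        trimmed_mean([rng.choice(data) for _ in range(n)], p) for _ in range(reps)
    ]
    return statistics.stdev(estimates)

# Ratings mostly around 4-5, with a couple of spurious 1s thrown in.
data = [4, 5, 4, 3, 4, 5, 4, 4, 5, 3, 4, 5, 1, 1]
candidates = [0.0, 0.05, 0.10, 0.20, 0.30, 0.40]
best_p = min(candidates, key=lambda p: bootstrap_se(data, p))
```

The idea is simply to prefer the trimming proportion whose estimate is most stable under resampling; heavier contamination should push `best_p` upward.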