It is true that the median is more robust to outliers (less influenced by them) than the mean.
My understanding is that the reason statistics tends to favour the mean (and squared errors, for that matter) is that in the long run, on average, and assuming symmetric distributions, they get closer to the true answer than medians and absolute deviations.
However, if you are not interested in being correct on average over the long run and are more interested in being close to right on any given point estimate, the median is probably a reasonable statistic for you to select.
It is unclear from your question what exactly you want to do with your "error estimate". Do you want to use it to do a statistical test? If all you need is another summary statistic describing the dispersion around your observed measure of central tendency AND you want to continue to leverage whatever 'advantage' the median is giving you... then I would recommend calculating the median absolute deviation. That is the median of $|X-\text{Mdn}|$.
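A minimal sketch of the median absolute deviation using only the standard library (the data set here is just an illustrative example):

```python
import statistics

def median_abs_deviation(xs):
    """Median of the absolute deviations from the sample median."""
    mdn = statistics.median(xs)
    return statistics.median(abs(x - mdn) for x in xs)

data = [2, 3, 5, 7, 100]           # one large outlier
print(median_abs_deviation(data))  # 2: the outlier barely moves it
```

Note how the outlier at 100 contributes only one large deviation, which the outer median then ignores.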
It's certainly possible to place some bounds on the median, but without further assumptions they may be quite weak. The problem is that the only gauge you have on how skewed it might be (particularly, in the sense of the second Pearson skewness) is the relative positions of the extrema to the mean, and they're typically a very weak indicator of that. Adding in the fact that the variable is nonnegative gives a second very weak indicator of skewness (the relative size of the standard deviation and mean).
But the second Pearson skewness does give us a bound: for a distribution, the median cannot be more than one standard deviation from the mean. (For a sample, because of the effect of the usual Bessel correction on standard deviation, it must lie somewhat inside those limits.)
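This bound follows from a standard argument. Writing $\mu$, $m$ and $\sigma$ for the distribution's mean, median and standard deviation:

$$|\mu - m| = |E(X - m)| \leq E|X - m| \leq E|X - \mu| \leq \sqrt{E(X-\mu)^2} = \sigma\,.$$

The second inequality uses the fact that the median minimizes the mean absolute deviation $E|X - c|$ over all constants $c$, and the third follows from Jensen's inequality.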
If the standard deviation is small, that may be adequate for some purposes.
If we denote the median as $\stackrel{\sim}{x}$, the mean as $\bar{x}$, the usual sample standard deviation as $s_{n-1}$ (and let $s_n=\sqrt{\frac{n-1}{n}}s_{n-1}$ be the uncorrected s.d.), the minimum as $x_{(1)}$ and the maximum as $x_{(n)}$ then naively, we can immediately say that
$$\max(x_{(1)},\bar{x}-s_n)\leq\,\,\stackrel{\sim}{x}\,\,\leq \min(x_{(n)},\bar{x}+s_n)\,.$$
By more careful consideration of all the information, knowing the minimum and maximum might bound the result still further, but my guess is not necessarily by very much (it may help more in some cases than others). Knowing the sample size, $n$, may also add some important information, particularly if $n$ is small.
The fact that the variable is non-negative might help. Markov's inequality implies that the median cannot be more than twice the mean; that may sometimes improve on the bound of the mean plus a standard deviation (though, if the s.d. were greater than the mean, you'd usually expect the median to be lower than the mean; again it may be possible to get better bounds still).
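The Markov argument, for a non-negative $X$ with mean $\mu > 0$:

$$P(X \geq 2\mu) \leq \frac{E(X)}{2\mu} = \frac{1}{2}\,,$$

so $P(X < 2\mu) \geq \frac{1}{2}$, and hence no median can exceed $2\mu$.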
Anyway, adding that bound to our previous naive bounds, we have:
$$\max(x_{(1)},\bar{x}-s_n)\leq\,\,\stackrel{\sim}{x}\,\,\leq \min(x_{(n)},\bar{x}+s_n,2\bar{x})\,.$$
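As a concrete sketch, these combined bounds can be computed directly from a sample using the standard library (`statistics.pstdev` gives the uncorrected $s_n$; the data here are just an illustrative non-negative example):

```python
import statistics

def median_bounds(xs):
    """Naive bounds on the sample median from summary statistics,
    assuming the data are non-negative."""
    xbar = statistics.mean(xs)
    s_n = statistics.pstdev(xs)  # uncorrected s.d., s_n
    lo = max(min(xs), xbar - s_n)
    hi = min(max(xs), xbar + s_n, 2 * xbar)
    return lo, hi

data = [0.5, 1.2, 2.0, 3.3, 9.0]
lo, hi = median_bounds(data)
assert lo <= statistics.median(data) <= hi
```

For this sample the minimum tightens the lower bound, while $\bar{x}+s_n$ happens to be the binding upper bound.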
(In that situation we also know that the median is above $0$, but given we know $x_{(1)}$, that knowledge doesn't ever improve the lower bound.)
Edit: I simulated a few data sets from different distributions (partly to see how the bounds behaved and partly as a double check that I hadn't made any egregious errors). One of the examples did have the property that $2\bar x$ was a bit less than $\bar x +s_n$ (thus reducing the upper bound on the median, so adding that third component does sometimes help), but as I expected might often be the case, the actual median was less than the mean (so it didn't make the upper bound very close).
Still, the intervals did actually enclose the median for every example I did.
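The sort of simulation described above can be sketched as follows (`expovariate` and `lognormvariate` are just convenient skewed, non-negative examples, not the distributions actually used):

```python
import random
import statistics

random.seed(1)

# Draw samples from skewed, non-negative distributions and check
# that the naive bounds enclose the sample median.
for gen in (lambda: random.expovariate(1.0),
            lambda: random.lognormvariate(0.0, 1.0)):
    xs = [gen() for _ in range(200)]
    xbar = statistics.mean(xs)
    s_n = statistics.pstdev(xs)
    lo = max(min(xs), xbar + -s_n)
    hi = min(max(xs), xbar + s_n, 2 * xbar)
    assert lo <= statistics.median(xs) <= hi
```

Since both bounds hold for the empirical distribution of any non-negative sample, the assertions always pass; the interesting question is how tight the interval is in each case.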
If you assumed some distributional form (like, say, normality), then of course you can get much better estimates (/intervals).
TL;DR - I'm not aware of specific names being given to different estimators of sample medians. Definitions of sample statistics are rather fussy, and different resources give different ones.
In Hogg, McKean and Craig's Introduction to Mathematical Statistics, the authors provide a definition of medians of random samples, but only in the case that there are an odd number of samples! The authors write
The authors provide no guidance on what to do if you have an even number of samples. (Note that $Y_i$ is the $i$th smallest datum.)
But this seems unnecessarily restrictive; I would prefer to be able to define a median of a random sample for even or odd $n$. Moreover, I would like the median to be unique. Given these two requirements, I have to make some decisions about how to best find a unique sample median. Both Algorithm A and Algorithm B satisfy these requirements. Imposing additional requirements could eliminate either or both from consideration.
Algorithm B has the property that half the data fall above the value, and half the data fall below the value. In light of the definition of the median of a random variable, this seems nice.
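For concreteness, here is a sketch of two common unique-median conventions using Python's `statistics` module (whether either corresponds to the Algorithm A or Algorithm B discussed above depends on their definitions, which aren't reproduced here): averaging the two middle order statistics for even $n$, versus taking the lower one.

```python
import statistics

data = [1, 2, 3, 4]

# Convention 1: average the two middle order statistics when n is even.
m_avg = statistics.median(data)      # (2 + 3) / 2 = 2.5

# Convention 2: take the lower of the two middle order statistics.
m_low = statistics.median_low(data)  # 2

# The averaging convention puts equally many data strictly below
# and strictly above the value (when the middle values differ):
assert sum(x < m_avg for x in data) == sum(x > m_avg for x in data) == 2
```

The averaging convention has the "half above, half below" property described above; the lower-value convention instead guarantees the median is an actual datum.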
Whether or not a particular estimator breaks unit tests is a property of the unit tests -- unit tests written against a specific estimator won't necessarily hold when you substitute another estimator. In the ideal case, the unit tests were chosen because they reflect the critical needs of your organization, not because of a doctrinaire argument over definitions.