It sounds like you're talking about what's sometimes called a regressogram, with a log-scaled x-variable.
There are a number of issues here, not necessarily in logical order:
* the quantity you're plotting is a mean, so if you want to plot a median absolute deviation, it's the MAD of the means you want, not the MAD of the data.

* your suggestion $\text{MAD}/\sqrt n$ leads to the question "when is the MAD of the mean equal to the MAD of the data divided by $\sqrt n$?"

* when you say "it seems that median absolute deviation is a better estimator than mean absolute deviation", that depends on what we're talking about: a better estimator of what, and under what circumstances?
So, "when is the MAD of the mean equal to the MAD of the data divided by $\sqrt n$?"
The answer is that, unlike the situation with the standard deviation, this is not generally the case. The reason standard deviations of averages scale as they do is that variances of independent random variables add (more precisely, the variance of a sum of independent variables is the sum of their variances), irrespective of the distributions of the components (as long as the variances all exist). It is this particular property that largely accounts for the popularity of variances and standard deviations.

Neither the median deviation nor the mean deviation has that property in general.
However, when the data are normal, they will in effect inherit that property: the ratio of the population mean deviation (or median deviation) to the standard deviation is a constant for a normal distribution, normals are closed under convolution, and standard deviations scale that way.

If the data were reasonably close to normal, that approximation could perhaps be adequate.
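A quick simulation sketch of that normal-case scaling (the sample size, number of replications, and seed below are arbitrary choices, not anything from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 100_000

# Many samples of size n from a standard normal
means = rng.standard_normal((reps, n)).mean(axis=1)

# MAD of the sample means (deviations about their median)
mad_of_means = np.median(np.abs(means - np.median(means)))

# Population MAD of a standard normal is Phi^{-1}(0.75) ~ 0.6745;
# the claimed scaling divides it by sqrt(n)
scaled_pop_mad = 0.6745 / np.sqrt(n)

print(mad_of_means, scaled_pop_mad)  # close to each other
```

For markedly non-normal data the two quantities can differ noticeably, which is the point of the preceding paragraphs.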
What else might be done? One way to estimate the standard error of a statistic is via the bootstrap; for the mean deviation (being itself a mean) this should do well in large samples. Unfortunately, medians don't do so well under the bootstrap, and this issue carries over to median absolute deviations.
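A minimal bootstrap sketch for the standard error of the mean absolute deviation (the exponential data here are purely illustrative stand-ins for your own sample):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100)  # illustrative data

def mean_abs_dev(a):
    """Mean absolute deviation about the mean."""
    return np.mean(np.abs(a - a.mean()))

# Bootstrap: resample with replacement, recompute the statistic,
# and take the sd of the bootstrap replicates as the SE estimate
boot = np.array([
    mean_abs_dev(rng.choice(x, size=x.size, replace=True))
    for _ in range(5000)
])
se_hat = boot.std(ddof=1)
print(se_hat)
```

The same loop with a median-based statistic would run just as easily, but for the reasons above its bootstrap standard error is less trustworthy.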
If you have some probability model for your data, there's also simulation as a way of approaching the problem.
R. A. Fisher's initial rule of thumb was that any excess of two standard errors was worth investigating further. For a Gaussian distribution, looking at both tails, this has a probability of about $4.55\%$ of occurring, which he rounded to $5\%$; this $5\%$ is consequently in common use as a threshold for statistical significance.

The $5\%$ threshold is arbitrary, and in non-Gaussian cases, such as $t$-tests with small samples or $\chi^2$-tests, it does not correspond to $2$ (or $1.96$) standard errors.
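The $4.55\%$ figure is just the two-tailed Gaussian probability beyond two standard errors, and the small-sample $t$ cutoffs illustrate the non-Gaussian point. A quick check using `scipy.stats`:

```python
from scipy import stats

# Two-tailed probability beyond 2 standard errors for a Gaussian
p = 2 * stats.norm.sf(2)
print(round(p, 4))  # ~0.0455, which Fisher rounded to 5%

# For small-sample t-tests, the two-tailed 5% cutoff exceeds 1.96
for df in (5, 10, 30):
    print(df, round(stats.t.ppf(0.975, df), 3))
```

As the degrees of freedom grow, the $t$ cutoff falls back toward the Gaussian $1.96$.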
The standard error of a statistic (often the mean) is the standard deviation of its sampling distribution: it gets its name as being a measure of the scale of the possible error in estimating a parameter.
This is usually not the standard deviation of the population. For example, if the population has expected value $\mu$ and standard deviation $\sigma$, and you draw $n$ independent observations with replacement, then the sample mean has expected value $\mu$ and standard deviation $\frac{\sigma}{\sqrt{n}}$, which is rather smaller than $\sigma$; intuitively, a larger sample size is likely to lead to smaller errors.
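That $\frac{\sigma}{\sqrt{n}}$ scaling is easy to confirm by simulation (the values of $\mu$, $\sigma$, and $n$ below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, reps = 10.0, 3.0, 25, 100_000

# Many samples of size n; the sd of the sample means
# is (an estimate of) the standard error of the mean
means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print(means.std(ddof=1), sigma / np.sqrt(n))  # both ~0.6
```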
Perhaps this paper, "A Simple Normal Approximation for Weibull Distribution with Application to Estimation of Upper Prediction Limit" by H. V. Kulkarni and S. K., will help answer your questions about the upper prediction limit (UPL).
Link: http://www.hindawi.com/journals/jps/2011/863274/