Regarding the first question, I think this depends on how you plot the data. If you are using a bar chart with the individual means side by side, then I would add error bars for the individual means. If the height of the bar represents the difference of the two means, then use the standard error of the mean difference.
As to the second question: if I understand you correctly, you are looking at mean differences on subsets of the data, one where condition 1 applies and one where condition 2 applies. Since this is what you want to show, I would use the corresponding standard error (i.e., for condition 1 report the standard error of the mean difference computed from the data where condition 1 applies, and do the same for the mean difference where condition 2 applies).
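To make the per-condition calculation concrete, here is a minimal sketch in Python. The group labels, sample sizes, and normal data are all hypothetical stand-ins, not anything from your actual dataset; the point is just the standard-error formula for a difference of two independent group means.

```python
import numpy as np

def se_mean_diff(x, y):
    """Standard error of the difference of two independent group means."""
    return np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))

rng = np.random.default_rng(0)
# hypothetical data: group A vs group B, measured separately under each condition
for condition in (1, 2):
    group_a = rng.normal(10.0, 2.0, size=50)  # stand-in for group A under this condition
    group_b = rng.normal(12.0, 2.0, size=50)  # stand-in for group B under this condition
    diff = group_a.mean() - group_b.mean()
    print(f"condition {condition}: diff = {diff:.2f} +/- {se_mean_diff(group_a, group_b):.2f}")
```

Each condition's error bar then reflects only the data for that condition, which matches what the plot is claiming.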
It sounds like you're talking about what's sometimes called a regressogram, with a log-scaled x-variable.
There are a number of issues here, not necessarily in logical order:
- the quantity you're plotting is a mean, so if you want to plot median absolute deviation, it's the MAD of the means you want.
- your suggestion $\text{MAD}/\sqrt n$ leads to the question "when is the MAD of the mean equal to the MAD of the data divided by $\sqrt n$?"
- when you say "it seems that median absolute deviation is a better estimator than mean absolute deviation" ... that depends on what we're talking about: a better estimator of what, and under what circumstances?
So, "when is the MAD of the mean equal to the MAD of the data divided by $\sqrt n$?"
The answer is, unlike the situation with standard deviation, this is not generally the case. The reason why standard deviations of averages scale as they do is that variances of independent random variables add (more precisely, the variance of the sum is the sum of the variances when the variables are independent), irrespective of the distributions of the components (as long as the variances all exist). It is this particular property that largely accounts for the popularity of variances and standard deviations.
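The additivity of variances under independence is easy to check by simulation. A small sketch with two deliberately non-normal, independent variables (the distributions and sample size here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
# two independent, clearly non-normal variables
x = rng.exponential(scale=2.0, size=n)  # exponential: Var = scale**2 = 4
y = rng.uniform(-3.0, 3.0, size=n)      # uniform on (-3, 3): Var = 6**2 / 12 = 3
print(np.var(x) + np.var(y))  # ≈ 7
print(np.var(x + y))          # ≈ 7 as well: variances add under independence
```

No comparable identity holds for the mean deviation or the median deviation of a sum, which is the point of the next paragraph.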
Neither the median deviation nor the mean deviation has that property in general.
However, when the data are normal, they will in effect inherit that property: the ratio of the population mean deviation (or median deviation) to the standard deviation is a constant for normal distributions, normals are closed under convolution, and standard deviations scale that way.
If the data were reasonably close to normal, it could perhaps be adequate.
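A quick simulation illustrates this for normal data (the sample size and number of replicates below are arbitrary): the MAD of the sampling distribution of the mean comes out close to the typical within-sample MAD divided by $\sqrt n$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 25, 20_000

def mad(a, axis=None):
    """Median absolute deviation from the median."""
    med = np.median(a, axis=axis, keepdims=True)
    return np.median(np.abs(a - med), axis=axis)

samples = rng.normal(0.0, 1.0, size=(reps, n))
mad_of_means = mad(samples.mean(axis=1))       # MAD of the sampling distribution of the mean
typical_mad = np.median(mad(samples, axis=1))  # typical within-sample MAD
print(mad_of_means, typical_mad / np.sqrt(n))  # close for normal data
```

Repeating the experiment with a markedly skewed or heavy-tailed distribution in place of `rng.normal` shows the two quantities drifting apart.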
What else might be done? One way to estimate the standard error of a statistic is via the bootstrap; for the mean deviation - being a mean - this should do well in large samples. Unfortunately, medians don't do so well under the bootstrap, and this issue will carry over to median absolute deviations.
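A minimal bootstrap sketch for the standard error of the mean absolute deviation (the data here are simulated stand-ins; with real data you would resample your observed sample instead):

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_abs_dev(a):
    """Mean absolute deviation from the mean."""
    return np.mean(np.abs(a - np.mean(a)))

x = rng.normal(0.0, 1.0, size=200)  # stand-in for the observed data
boot = np.array([
    mean_abs_dev(rng.choice(x, size=x.size, replace=True))
    for _ in range(5_000)
])
print("bootstrap SE of the mean deviation:", boot.std(ddof=1))
```

Because the mean deviation is itself an average, its bootstrap distribution behaves well in large samples; swapping `mean_abs_dev` for a median-based statistic runs into the bootstrap-of-the-median problems mentioned above.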
If you have some probability model for your data, there's also simulation as a way of approaching the problem.
Best Answer
You should simply treat your SE as an SD and use exactly the same error propagation formulas. Indeed, the standard error of the mean is nothing other than the standard deviation of your estimate of the mean, so the math does not change. In your particular case, when you estimate the SE of $C=A-B$ and you know $\sigma^2_A$, $\sigma^2_B$, $N_A$, and $N_B$, then $$\mathrm{SE}_C=\sqrt{\frac{\sigma^2_A}{N_A}+\frac{\sigma^2_B}{N_B}}.$$
Please note that another option that could potentially sound reasonable is incorrect: $$\mathrm{SE}_C \ne \sqrt{\frac{\sigma^2_A+\sigma^2_B}{N_A+N_B}}.$$
To see why, imagine that $\sigma^2_A=\sigma^2_B=1$, but in one case you have a lot of observations and in the other only one: $N_A=100, N_B=1$. The standard error of the mean of the first group is 0.1, and of the second it is 1. Now if you use the second (incorrect) formula, you get approximately 0.14 as the joint standard error, which is far too small given that your second measurement is only known to $\pm 1$. The correct formula gives $\approx 1$, which makes sense.