You should simply treat your SE as an SD and use exactly the same error-propagation formulas. Indeed, the standard error of the mean is nothing other than the standard deviation of your estimate of the mean, so the math does not change. In your particular case, when you estimate the SE of $C=A-B$ and you know $\sigma^2_A$, $\sigma^2_B$, $N_A$, and $N_B$, then $$\mathrm{SE}_C=\sqrt{\frac{\sigma^2_A}{N_A}+\frac{\sigma^2_B}{N_B}}.$$
Please note that another option that might sound reasonable is incorrect: $$\mathrm{SE}_C \ne \sqrt{\frac{\sigma^2_A+\sigma^2_B}{N_A+N_B}}.$$
To see why, imagine that $\sigma^2_A=\sigma^2_B=1$, but in one case you have a lot of observations and in the other only one: $N_A=100$, $N_B=1$. The standard error of the mean of the first group is 0.1, and of the second it is 1. If you use the second (incorrect) formula, you get approximately 0.14 as the joint standard error, which is far too small given that your second measurement is only known to within $\pm 1$. The correct formula gives $\approx 1$, which makes sense.
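The example above is easy to check by simulation. The sketch below (the parameters match the example; the trial count and seed are my choices) repeatedly draws both samples, forms $C = \bar A - \bar B$, and compares the empirical spread of $C$ with the correct formula:

```python
import math
import random

# Assumed parameters from the example above: unit variances, N_A = 100, N_B = 1.
random.seed(0)
N_A, N_B = 100, 1
trials = 20_000

diffs = []
for _ in range(trials):
    mean_a = sum(random.gauss(0, 1) for _ in range(N_A)) / N_A
    mean_b = sum(random.gauss(0, 1) for _ in range(N_B)) / N_B
    diffs.append(mean_a - mean_b)

m = sum(diffs) / trials
empirical_se = math.sqrt(sum((d - m) ** 2 for d in diffs) / trials)
correct_se = math.sqrt(1 / N_A + 1 / N_B)  # ~1.005, dominated by the N_B = 1 group
```

The empirical standard deviation of $C$ lands near 1, not near 0.14, confirming that the small group dominates the uncertainty.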
The quoted formula is not quite right. Let's derive the correct one.
Since the population mean (or any other constant) may be subtracted from every value in a population $S$ without changing the variance of the population or of any sample thereof, we might as well assume the population mean is zero. Letting the values in the population be $\{x_i\, \vert\, i\in S\}$, this implies
$$0 = \sum_{i\in S} x_i.$$
Squaring both sides maintains the equality, giving
$$0 = \sum_{i,j\in S}x_ix_j = \sum_{i\in S}x_i^2 + \sum_{i \ne j \in S} x_ix_j,$$
whence
$$\sum_{i\ne j \in S} x_ix_j = -\sum_{i\in S} x_i^2.$$
This key result will be employed later.
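The key result is easy to sanity-check numerically. In this sketch (the raw values are arbitrary, my choice for illustration), we center a small population at zero and confirm that the cross terms sum to the negative of the sum of squares:

```python
# Center an arbitrary small population so its mean is zero.
raw = [3.0, 7.0, 1.0, 4.0, 10.0]
mu = sum(raw) / len(raw)
x = [v - mu for v in raw]  # now sum(x) == 0 (up to rounding)

# Cross terms over all ordered pairs with i != j.
cross = sum(x[i] * x[j]
            for i in range(len(x))
            for j in range(len(x)) if i != j)
squares = sum(v * v for v in x)
# The identity says: cross == -squares.
```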
Let $S$ have $N$ elements. Because its mean is zero, its variance is the average squared value:
$$s^2 = \frac{1}{N}\sum_{i\in S}x_i^2.$$
(Please note that there can be no dispute about the denominator of $N$; in particular, it definitely is not $N-1$: this is a population variance, not an estimator.)
To find the variance of the sample distribution of the mean, consider all possible $n$-element samples. Each corresponds to an $n$-subset $A\subset S$ and has mean
$$\frac{1}{n}\sum_{i\in A} x_i.$$
Since the mean of all the sample means equals the mean of $S$, which is zero, the variance of these $\binom{N}{n}$ sample means is the average of their squares:
$$s_n^2 = \frac{1}{\binom{N}{n}} \sum_{A\subset S}\left(\frac{1}{n}\sum_{i\in A}x_i\right)^2 = \frac{1}{n^2\binom{N}{n}} \sum_{A\subset S}\sum_{i,j\in A}x_ix_j \\= \frac{1}{n^2\binom{N}{n}} \sum_{A\subset S}\left(\sum_{i\in A}x_i^2 + \sum_{i\ne j\in A}x_ix_j\right) .$$
(Once again, $\binom{N}{n}$, not $\binom{N}{n}-1$, is the correct denominator: this is the variance of a collection of $\binom{N}{n}$ numbers, not an estimator of anything.)
Fix, for a moment, any particular index $i$. The value $x_i$ will appear in $\binom{N-1}{n-1}$ samples, because each such sample supplements $x_i$ with $n-1$ more elements of $S$ out of the $N-1$ remaining elements (sampling is without replacement, remember). Its contribution to the right hand side therefore equals $\binom{N-1}{n-1}x_i^2$.
Also fixing an index $j\ne i$, similar reasoning shows the product $x_ix_j$ appears in $\binom{N-2}{n-2}$ samples, thereby contributing $\binom{N-2}{n-2}x_ix_j$ to the right hand side. Therefore, upon summing over all such $i$ and $j$ in $S$,
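Both counting claims can be verified by brute force on a small population. In this sketch (the sizes $N=6$, $n=3$ are my choice), we enumerate all $n$-subsets and count how many contain a fixed element, or a fixed pair of elements:

```python
from itertools import combinations
from math import comb

N, n = 6, 3
samples = list(combinations(range(N), n))  # all n-subsets of an N-element set

# Samples containing a fixed element (index 0): should be C(N-1, n-1).
with_0 = sum(1 for A in samples if 0 in A)
# Samples containing a fixed pair (indices 0 and 1): should be C(N-2, n-2).
with_01 = sum(1 for A in samples if 0 in A and 1 in A)
```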
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1}\sum_{i\in S}x_i^2 + \binom{N-2}{n-2}\sum_{i\ne j\in S}x_ix_j\right).$$
Plug the first result into that last sum:
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1}\sum_{i\in S}x_i^2 + \binom{N-2}{n-2}\left(-\sum_{i\in S}x_i^2\right)\right).$$
It is now straightforward to relate this to the variance of $S$, because $\sum_{i\in S}x_i^2 = Ns^2$:
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1} - \binom{N-2}{n-2}\right)\left(Ns^2\right) = \frac{s^2}{n}\left(1 - \frac{n-1}{N-1}\right).$$
Thus the sampling variance for sampling with replacement, $\frac{s^2}{n}$, is multiplied by $1 - \frac{n-1}{N-1}$ to obtain the sampling variance for sampling without replacement, $s_n^2$. Accordingly, the multiplicative adjustment for the sampling standard deviation is the square root, $\sqrt{1- \frac{n-1}{N-1}}$. This differs from the quoted formula, which uses $\sqrt{1 - \frac{n}{N}}$.
Two simple checks can give us some comfort concerning the correctness of this result. First, the sample variance of means of samples of size $n=1$, $s_1^2$, obviously equals the population variance $s^2$. The correct formula states
$$s_1^2 = \frac{s^2}{1}\left(1 - \frac{1-1}{N-1}\right) = s^2,$$
as it should. Unfortunately, the quoted formula asserts that $s_1^2 = s^2\left(1 - \frac{1}{N}\right)$, which obviously cannot be right. Second, the sample variance of the means of samples of size $n=N$ is zero, because there is no variation, and indeed both formulas give $0$ in this case.
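The derived formula, including both boundary checks, can be verified exhaustively on a small population. This sketch (the population values are arbitrary, my choice) enumerates every $n$-subset for every $n$ and compares the variance of the sample means against $\frac{s^2}{n}\left(1 - \frac{n-1}{N-1}\right)$:

```python
from itertools import combinations

# An arbitrary small population (example values of my choosing).
pop = [2.0, 5.0, 11.0, 3.0, 8.0, 6.0]
N = len(pop)
mu = sum(pop) / N
s2 = sum((v - mu) ** 2 for v in pop) / N  # population variance, denominator N

results = {}
for n in range(1, N + 1):
    means = [sum(A) / n for A in combinations(pop, n)]
    m = sum(means) / len(means)
    sn2 = sum((x - m) ** 2 for x in means) / len(means)  # denominator C(N, n)
    predicted = (s2 / n) * (1 - (n - 1) / (N - 1))
    results[n] = (sn2, predicted)
```

In particular, `results[1]` recovers the population variance and `results[N]` is zero, matching the two checks above.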
Best Answer
$$y=\log_2 \frac a b=\log_2 a - \log_2 b$$
We know how to propagate worst-case errors in subtraction and addition (the absolute errors add): $$\delta y\approx\delta (\log_2 a)+\delta (\log_2 b)$$
All we need is to handle the logs: $$\delta (\log_2 a)=\frac{\delta a} {a\ln 2}$$
So you get $$\delta y\approx\frac{\delta a} {a\ln 2}+ \frac{\delta b} {b\ln 2}$$ In other words, the absolute error of $y$ is the sum of the relative errors of $a$ and $b$, each divided by $\ln 2$.
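A quick numeric check of the first-order bound, with example values of my choosing for $a$, $b$, and their errors: push $a$ up and $b$ down simultaneously to realize the worst case, and compare the exact deviation of $\log_2(a/b)$ with the linearized bound.

```python
import math

# Example values (my choice, not from the original).
a, da = 8.0, 0.1
b, db = 2.0, 0.05

y = math.log2(a / b)
# First-order worst-case bound from the formula above.
dy_linear = da / (a * math.log(2)) + db / (b * math.log(2))
# Exact worst-case deviation: a at its maximum, b at its minimum.
dy_exact = math.log2((a + da) / (b - db)) - y
```

For small relative errors the two agree closely, as expected from the first-order expansion.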
Of course, you can go for the "exact" formula $$\delta (x+z) = \sqrt{(\delta x)^2+(\delta z)^2}$$ However, remember that it is "exact" only to an extent: it assumes, among other things, that the errors are uncorrelated. Hence, I never use it.