If this is a completely balanced within-subject design with $27\times 6=162$ observations, then you can actually calculate the marginal means: simply average over the levels of the second factor. Of course, you have to be sure that averaging over different conditions is meaningful for your planned experiment - do you expect each of those conditions to be present with about 1/3 probability?
The real difficulty is with the variance of the difference. It is well known that $$Var(X-Y) = Var(X) + Var(Y) - 2 SD(X)SD(Y)Corr(X,Y)$$
The problem is that you don't know the within-subject correlation.
Option 1. You could just guess at a value: would you expect the correlation to be high or low? Since higher correlation leads to lower variance of the difference, you could assume the worst-case scenario of 0 correlation and be guaranteed to overestimate the required sample size (unless the true correlation is negative, but that is rare).
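To see how much the assumed correlation matters, here is a minimal sketch using placeholder standard deviations (the 1.2 and 1.5 are made-up values; substitute the published ones):

```python
import math

# Hypothetical SDs of the two within-subject conditions (assumed values).
sd_x, sd_y = 1.2, 1.5

# SD of the difference shrinks as the within-subject correlation grows.
for rho in (0.0, 0.3, 0.6, 0.9):
    var_diff = sd_x**2 + sd_y**2 - 2 * sd_x * sd_y * rho
    print(f"rho = {rho:.1f}  SD(X-Y) = {math.sqrt(var_diff):.3f}")
```

The rho = 0 row is the conservative choice for a power calculation, since it gives the largest SD of the difference.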
Option 2. If the published results contain more information, such as a p-value from a test, you could try to back out the correlation. For a complicated design like this one it might be difficult to do analytically, but you could try a simulation approach: given a correlation coefficient, simulate data with the published means and variances, run the test, and check the p-value. Adjust the correlation coefficient until you get close to the published result.
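A sketch of that calibration loop might look like the following. All the means, SDs, and sample size here are placeholders, not values from any actual paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_p(rho, n=27, mean=(2.0, 3.8), sd=(1.2, 1.5), reps=500):
    """Median p-value of a paired t-test for a given within-subject
    correlation. Means, SDs, and n are placeholder values."""
    cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    pvals = []
    for _ in range(reps):
        x = rng.multivariate_normal(mean, cov, size=n)
        pvals.append(stats.ttest_rel(x[:, 0], x[:, 1]).pvalue)
    return float(np.median(pvals))

# Try a grid of correlations and keep the one whose simulated p-value
# comes closest to the published result.
```

A coarse grid over rho in [0, 1) is usually enough, since you only need the correlation to roughly match, not to several decimal places.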
The mean $E(X+Y)$ is equal to the sum of the means $E(X)$ and $E(Y)$, i.e., in your case $2+3.8=5.8$.
The standard deviation is the square root of the variance, and for a sum of two random variables
$Var(X+Y) = Var(X)+Var(Y)+2Cov(X,Y)$.
If you assume that the use of herbicide and fungicide are independent - a bold assumption, although I don't know much about agriculture - then this simplifies to
$Var(X+Y)=Var(X)+Var(Y)$
and allows you to calculate the standard deviation by observing that
$Var(X)=0.82^2=0.6724$
$Var(Y)=2.5^2=6.25$
which gives $\sigma_3=\sqrt{0.6724+6.25} \approx 2.631$.
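The arithmetic above can be checked in a couple of lines:

```python
import math

var_x = 0.82 ** 2   # herbicide variance: 0.6724
var_y = 2.5 ** 2    # fungicide variance: 6.25
sd_sum = math.sqrt(var_x + var_y)  # SD of the sum, assuming independence
print(round(sd_sum, 3))  # 2.631
```

If the two treatments were correlated, you would add the $2Cov(X,Y)$ term back in before taking the square root.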
The Morgan-Pitman test is the classical way of testing for equal variance of two dependent groups.
For $n$ pairs of randomly sampled observations
$(X_{11}, X_{12}),...,(X_{n1},X_{n2})$
define
$U_i = X_{i1}-X_{i2}$
and
$V_i = X_{i1}+X_{i2}$
for $i=1,\ldots,n$.
Then under
$H_0: \sigma_1^2=\sigma_2^2$
the correlation of $U$ and $V$ is zero.
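A minimal sketch of the test, using a Pearson correlation between the differences and sums (the data here are simulated, with the second variable deliberately given a larger variance):

```python
import numpy as np
from scipy import stats

def morgan_pitman(x1, x2):
    """Morgan-Pitman test for equal variances of paired data:
    under H0, the differences U and sums V are uncorrelated."""
    x1, x2 = np.asarray(x1), np.asarray(x2)
    u = x1 - x2
    v = x1 + x2
    r, p = stats.pearsonr(u, v)
    return r, p

rng = np.random.default_rng(1)
x1 = rng.normal(0, 1, 50)
x2 = 0.6 * x1 + rng.normal(0, 2, 50)  # second variable has larger variance
r, p = morgan_pitman(x1, x2)
print(f"r = {r:.3f}, p = {p:.4f}")
```

Since $Cov(U,V)=Var(X_1)-Var(X_2)$, a negative correlation here indicates the second variable has the larger variance.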
If the distributions of the two variables differ in shape, you should use a robust method of testing the hypothesis $\rho_{uv}=0$.
A good description is in Wilcox's Modern Statistics for the Social and Behavioral Sciences (Chapman & Hall 2012), including alternative ways of comparing robust measures of scale rather than just comparing the variance.