Your question is a perfect example of a regression model with quantitative and qualitative predictors. Specifically, the three age groups -- $1, 2, 3$ -- form the qualitative variable, and the quantitative variables are shopping habits and weight loss (I am guessing this because you are calculating correlations).
I must stress that this is a much better way of modeling than calculating separate group-wise correlations, because you use all of the data in one model, so your error estimates (p-values, etc.) will be more reliable. A more technical reason is the resulting higher degrees of freedom in the t statistic for testing the significance of the regression coefficients.
Operating by the rule that a qualitative predictor with $c$ levels can be handled by $c-1$ indicator variables, only two indicator variables, $X_1$ and $X_2$, are needed here, defined as follows:
$$
X_1 = 1 \text{ if person belongs to group 1}; 0 \text{ otherwise} .
$$
$$
X_2 = 1 \text{ if person belongs to group 2}; 0 \text{ otherwise}.
$$
This implies that group $3$ is represented by $X_1=0, X_2=0$. Represent your response, shopping habit, as $Y$ and the quantitative explanatory variable, weight loss, as $W$. You now fit this linear model:
$$
E[Y]=\beta_0 + \beta_1X_1 + \beta_2X_2 + \beta_3W.
$$
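To make the coding concrete, here is a minimal sketch in Python. The data are simulated (your actual variables would replace them); the point is the design matrix, where group 3 is the baseline encoded by $X_1 = X_2 = 0$:

```python
import numpy as np

# Simulated stand-in data: "group" plays the role of the three age groups,
# W is the quantitative predictor, Y is the response.
rng = np.random.default_rng(0)
n = 200
group = rng.integers(1, 4, size=n)          # group labels 1, 2, 3
W = rng.normal(size=n)
# True model: beta0 = 1.0, beta1 = 0.5, beta2 = -0.3, beta3 = 2.0
Y = (1.0 + 0.5 * (group == 1) - 0.3 * (group == 2) + 2.0 * W
     + rng.normal(scale=0.1, size=n))

# Design matrix: intercept, X1 (group 1 indicator), X2 (group 2 indicator), W.
X = np.column_stack([np.ones(n),
                     (group == 1).astype(float),
                     (group == 2).astype(float),
                     W])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta)  # roughly [1.0, 0.5, -0.3, 2.0] = (beta0, beta1, beta2, beta3)
```

In practice you would let a regression routine build the indicators for you (e.g. a formula interface with a categorical term), but spelling out the design matrix shows exactly what the dummy coding does.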
The obvious question is whether it matters if we swap $W$ and $Y$ (I chose shopping habits as the response arbitrarily). The answer is yes: the estimates of the regression coefficients will change, but the test for association between $Y$ and $W$ conditional on group (here a t-test, which for a single predictor is the same as testing for zero correlation) will not. Specifically,
$$
E[Y]= \beta_0 + \beta_3W \text{ -- for third group},
$$
$$
E[Y]= (\beta_0 + \beta_2)+\beta_3W \text{ -- for second group},
$$
$$
E[Y]= (\beta_0 + \beta_1)+\beta_3W \text{ -- for first group}.
$$
If you plot $Y$ against $W$, this is equivalent to having three separate lines, one per group. Plotting them is a good way to check that what you are testing makes sense (basically a form of EDA and model checking, though you need to mark the grouped observations properly). Three parallel lines indicate no interaction between group and $W$; strong interaction means the lines will cross.
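You can check the parallel-lines (no-interaction) assumption formally with a nested-model F-test. A hedged sketch on simulated data (where the slopes really are equal by construction, so no interaction should be detected):

```python
import numpy as np
from scipy import stats

# Simulated data with a common slope across groups (assumed setup).
rng = np.random.default_rng(0)
n = 200
group = rng.integers(1, 4, size=n)
W = rng.normal(size=n)
Y = (1.0 + 0.5 * (group == 1) - 0.3 * (group == 2) + 2.0 * W
     + rng.normal(scale=0.1, size=n))

X1 = (group == 1).astype(float)
X2 = (group == 2).astype(float)
X_reduced = np.column_stack([np.ones(n), X1, X2, W])      # parallel lines
X_full = np.column_stack([X_reduced, X1 * W, X2 * W])     # + interactions

def rss(X, Y):
    """Residual sum of squares of the least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ beta
    return r @ r

rss_r, rss_f = rss(X_reduced, Y), rss(X_full, Y)
df_num = X_full.shape[1] - X_reduced.shape[1]             # 2 interaction terms
df_den = n - X_full.shape[1]
F = ((rss_r - rss_f) / df_num) / (rss_f / df_den)
p = stats.f.sf(F, df_num, df_den)
print(F, p)  # no true interaction here, so p is typically not small
```

A small p-value would say the lines are not parallel, i.e. the group-specific slopes differ.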
How do you carry out the tests you ask about? Basically, once you fit the model and have the estimates, you need to test some contrasts. Specifically, for your comparisons:
$$
\text{Group 2 vs Group 3: } (\beta_0 + \beta_2) - \beta_0 = \beta_2 = 0,
$$
$$
\text{Group 1 vs Group 3: } (\beta_0 + \beta_1) - \beta_0 = \beta_1 = 0,
$$
$$
\text{Group 2 vs Group 1: } (\beta_0 + \beta_2) - (\beta_0+\beta_1) = \beta_2 - \beta_1 = 0.
$$
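Each contrast $c^\top\beta = 0$ can be tested with a t statistic $c^\top\hat\beta / \sqrt{c^\top \widehat{\mathrm{Cov}}(\hat\beta)\, c}$. A self-contained sketch on simulated data (the setup is assumed, not your actual data):

```python
import numpy as np
from scipy import stats

# Simulated data with genuine group differences (beta1 = 0.5, beta2 = -0.3).
rng = np.random.default_rng(0)
n = 200
group = rng.integers(1, 4, size=n)
W = rng.normal(size=n)
Y = (1.0 + 0.5 * (group == 1) - 0.3 * (group == 2) + 2.0 * W
     + rng.normal(scale=0.1, size=n))

X = np.column_stack([np.ones(n), (group == 1).astype(float),
                     (group == 2).astype(float), W])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

resid = Y - X @ beta
dof = n - X.shape[1]
s2 = resid @ resid / dof
cov = s2 * np.linalg.inv(X.T @ X)        # estimated covariance of beta_hat

def contrast_test(c):
    """t-test of H0: c' beta = 0, returning (t, two-sided p)."""
    t = (c @ beta) / np.sqrt(c @ cov @ c)
    return t, 2 * stats.t.sf(abs(t), dof)

# beta = (beta0, beta1, beta2, beta3); the three pairwise comparisons:
for name, c in [("2 vs 3", [0, 0, 1, 0]),
                ("1 vs 3", [0, 1, 0, 0]),
                ("2 vs 1", [0, -1, 1, 0])]:
    t, p = contrast_test(np.array(c, dtype=float))
    print(name, round(t, 2), round(p, 6))
```

With real data you would read these off the fitted-model summary (the first two are the reported coefficient t-tests; the third needs an explicit contrast).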
If I understand this correctly, you have
$B$. Measured values of biomass in grams, which you regard as your best measurement.
$C_1$ and $C_2$. Two measures of percent cover, which you regard as proxies for biomass.
The main question is surely the relationships between $B$ on the one hand and $C_1$ and $C_2$ on the other. As $B$ and $C_1$, $C_2$ are in quite different units of measurement, you can only assess accuracy by establishing what functions best approximate $B$ as predicted from $C_1$ and $C_2$ respectively.
Calculating a correlation is pertinent only to the extent that the relationships are approximately linear, and I would doubt that here, especially as $B$ is unbounded (no mathematical limit on its upper values) whereas the $C$s are bounded. I'd expect some kind of nonlinear relation.
Transforming percents or correlations is a secondary detail; arcsine transformations (by which you may well mean "arcsine of square root", as that is the common transformation here) may help with nonlinearity, or they may not. Nor is it clear that any standardisation will help at all: standardising each variable separately makes sense if there are linear relationships of the form $\hat B = a + bC$, but if so you are better off working directly with those relationships.
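For reference, the arcsine-square-root transform is easy to apply; an illustrative sketch with made-up percent cover values (assumed to lie in 0..100):

```python
import numpy as np

# Hypothetical percent cover values, purely for illustration.
cover_pct = np.array([5.0, 20.0, 50.0, 80.0, 95.0])
prop = cover_pct / 100.0               # convert to proportions in [0, 1]
prop_t = np.arcsin(np.sqrt(prop))      # arcsine of square root, in radians
print(prop_t)
```

The transform maps $[0, 1]$ onto $[0, \pi/2]$, stretching the ends of the scale relative to the middle; whether that actually linearises the $B$ vs $C$ relationship is an empirical question you can only answer from the plots.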
To give good advice we need to see plots of $B$ vs $C_1$ and $B$ vs $C_2$; there is, in my view, no correct method for this problem that can be given in abstraction. Listings of the data would be most helpful if they can be provided easily.
The GLM approach is pretty painful here, because your GLM is really doing a regression analysis, so if the variables are on different scales you would reject equality just because of that. By standardizing the variables first, the regression coefficients become equivalent to correlations, so you are then testing equality of correlations and scale differences no longer matter.
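The equivalence is easy to demonstrate on simulated data: after standardizing both variables, the simple-regression slope equals the Pearson correlation.

```python
import numpy as np

# Simulated data, purely to illustrate slope == correlation after standardizing.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(size=500)

zx = (x - x.mean()) / x.std()          # standardize: mean 0, sd 1
zy = (y - y.mean()) / y.std()
slope = np.polyfit(zx, zy, 1)[0]       # slope of zy regressed on zx
r = np.corrcoef(x, y)[0, 1]            # Pearson correlation of the raw data
print(slope, r)  # the two agree to floating-point precision
```

This is why testing equality of standardized coefficients amounts to testing equality of correlations.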