Your question is a perfect example of a regression model with quantitative and qualitative predictors. Specifically, age group (with three levels, $1$, $2$, and $3$) is the qualitative predictor, and the quantitative variables are shopping habits and weight loss (I am guessing this because you are calculating correlations).
I must stress that this is a much better way of modeling than calculating separate group-wise correlations, because you use all of the data in one model, so your error estimates (p-values, etc.) will be more reliable. A more technical reason is the resulting higher degrees of freedom in the t statistic used to test the significance of the regression coefficients.
Operating by the rule that a qualitative predictor with $c$ levels can be handled by $c-1$ indicator variables, only two indicator variables, $X_1$ and $X_2$, are needed here, defined as follows:
$$
X_1 = 1 \text{ if the person belongs to group 1}; 0 \text{ otherwise}.
$$
$$
X_2 = 1 \text{ if the person belongs to group 2}; 0 \text{ otherwise}.
$$
This implies that group $3$ is represented by $X_1=0, X_2=0$. Represent your response, shopping habit, as $Y$, and the quantitative explanatory variable, weight loss, as $W$. You can now fit this linear model:
$$
E[Y]=\beta_0 + \beta_1X_1 + \beta_2X_2 + \beta_3W.
$$
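A minimal sketch of fitting this dummy-coded model by least squares; the data below are simulated, and all group sizes, effect sizes, and variable names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n people in 3 age groups, weight loss W, shopping habit Y
n = 90
group = rng.integers(1, 4, size=n)          # group labels 1, 2, 3
W = rng.normal(5, 2, size=n)                # weight loss
Y = 2.0 + 1.5*(group == 1) - 0.5*(group == 2) + 0.8*W + rng.normal(0, 1, n)

# Dummy coding: group 3 is the reference level (X1 = X2 = 0)
X1 = (group == 1).astype(float)
X2 = (group == 2).astype(float)
X = np.column_stack([np.ones(n), X1, X2, W])   # design matrix [1, X1, X2, W]

# Least-squares estimates of (beta0, beta1, beta2, beta3)
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta)
```

Any regression software will do the same thing; the point is only that the dummy coding above reproduces the model $E[Y]=\beta_0 + \beta_1X_1 + \beta_2X_2 + \beta_3W$.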
The obvious question is: does it matter if we swap $W$ and $Y$ (since I chose shopping habits as the response arbitrarily)? The answer is yes, the estimates of the regression coefficients will change, but the test for "association" conditioned on groups (here a t-test, which for a single predictor is the same as testing the correlation) won't change. Specifically,
$$
E[Y]= \beta_0 + \beta_3W \text{ -- for the third group},
$$
$$
E[Y]= (\beta_0 + \beta_2)+\beta_3W \text{ -- for the second group},
$$
$$
E[Y]= (\beta_0 + \beta_1)+\beta_3W \text{ -- for the first group}.
$$
This is equivalent to having three separate lines, one per group, in a plot of $Y$ vs. $W$. Plotting them is a good way to check that what you are testing for makes sense (basically a form of EDA and model checking, though you need to mark the grouped observations properly). Three parallel lines indicate no interaction between group and $W$; strong interaction means the lines cross each other.
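If you want to check the parallel-lines (no-interaction) assumption formally, one option is an F-test comparing the model above to a fuller model that adds the products $X_1 W$ and $X_2 W$. A sketch on simulated data (all numbers invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Same hypothetical setup as before: 3 groups, predictor W, response Y,
# generated here WITHOUT any interaction
n = 90
group = rng.integers(1, 4, size=n)
W = rng.normal(5, 2, size=n)
Y = 2.0 + 1.5*(group == 1) - 0.5*(group == 2) + 0.8*W + rng.normal(0, 1, n)

X1 = (group == 1).astype(float)
X2 = (group == 2).astype(float)

# Reduced model: parallel lines (common slope beta3)
X_red = np.column_stack([np.ones(n), X1, X2, W])
# Full model: adds X1*W and X2*W, letting each group have its own slope
X_full = np.column_stack([X_red, X1*W, X2*W])

def rss(X, y):
    """Residual sum of squares of the least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

# F-test for the two interaction terms
rss_red, rss_full = rss(X_red, Y), rss(X_full, Y)
df1, df2 = 2, n - X_full.shape[1]
F = ((rss_red - rss_full) / df1) / (rss_full / df2)
p = stats.f.sf(F, df1, df2)
print(F, p)   # a large p-value is consistent with parallel lines
```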
How do you do the tests that you asked about? Basically, once you fit the model and have the estimates, you need to test some contrasts. Specifically, for your comparisons:
$$
\text{Group 2 vs Group 3: } (\beta_0 + \beta_2) - \beta_0 = \beta_2 = 0,
$$
$$
\text{Group 1 vs Group 3: } (\beta_0 + \beta_1) - \beta_0 = \beta_1 = 0,
$$
$$
\text{Group 2 vs Group 1: } (\beta_0 + \beta_2) - (\beta_0 + \beta_1) = \beta_2 - \beta_1 = 0.
$$
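A sketch of testing those contrasts by hand, using the standard formula $t = \mathbf{c}'\hat\beta \big/ \sqrt{\hat\sigma^2\,\mathbf{c}'(X'X)^{-1}\mathbf{c}}$; your software's contrast or `glht`-style facilities will do the same thing. The data are simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical data, same dummy coding as in the model above
n = 90
group = rng.integers(1, 4, size=n)
W = rng.normal(5, 2, size=n)
Y = 2.0 + 1.5*(group == 1) - 0.5*(group == 2) + 0.8*W + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), (group == 1).astype(float),
                     (group == 2).astype(float), W])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

resid = Y - X @ beta
df = n - X.shape[1]
sigma2 = resid @ resid / df                 # error variance estimate
XtX_inv = np.linalg.inv(X.T @ X)

def contrast_test(c):
    """t-test of H0: c'beta = 0."""
    est = c @ beta
    se = np.sqrt(sigma2 * c @ XtX_inv @ c)
    t = est / se
    p = 2 * stats.t.sf(abs(t), df)
    return t, p

# The three pairwise comparisons, with beta = (b0, b1, b2, b3):
contrasts = {
    "group 2 vs 3": np.array([0., 0., 1., 0.]),   # beta2 = 0
    "group 1 vs 3": np.array([0., 1., 0., 0.]),   # beta1 = 0
    "group 2 vs 1": np.array([0., -1., 1., 0.]),  # beta2 - beta1 = 0
}
for name, c in contrasts.items():
    t, p = contrast_test(c)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```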
Question 1
Make a table with those columns and three rows, then fill in the results. You don't need both $r$ and $r^2$.
Question 2
Yes. A CI has more information than a p-value. Technically, a 95% CI tells you "if I repeated this procedure many times, the intervals I construct would contain the true value 95% of the time," which is awkward. But it's more information than the p-value, which just tells you "if the null hypothesis of no correlation in the population were true, how often would I get a value as extreme or more extreme than this one?"
Question 3
As I've said in another answer to a virtually identical question, if either A or B can be thought of as a dependent variable, you can run a regression and include "group" as a covariate; you could also include the interaction of group and the independent variable.
Bonus
The correction is not needed for the p-value regarding the difference of correlations, as far as I can see; that's ONE test. You could argue for or against a correction for doing 3 analyses, but I don't see how you could get to 0.0006; even Bonferroni would be $.05/3$ (or possibly $.05/4$). Also, the null is NEVER supported; you can only fail to reject it.
HOWEVER, it is completely irrelevant. The three correlations are virtually identical.
Added bonus
Why do you keep asking the same question over and over?
Do you expect different results?
Best Answer
Descriptively, you can say that it is the strongest relationship. Whether it is significantly stronger than the other two depends on your sample size. There's an online calculator for that.
That's the same statistical question as above. Test each pair of correlations for the significance of the difference. As you perform three tests, you might want to think about a correction of the $\alpha$ level. Another possibility elaborated here would be to add age group as a dummy coded variable into a regression analysis.
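For completeness, the test those online calculators perform is the standard z-test for the difference of two independent correlations, via Fisher's $r$-to-$z$ transformation. A short sketch; the sample correlations and sizes below are invented:

```python
import numpy as np
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Two-sided z-test for the difference between two independent
    correlations, using Fisher's r-to-z transformation."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)     # arctanh is the Fisher z
    se = np.sqrt(1/(n1 - 3) + 1/(n2 - 3))       # SE of z1 - z2
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Made-up example: r = .50 in one group (n = 40), r = .30 in another (n = 45)
z, p = compare_correlations(0.50, 40, 0.30, 45)
print(z, p)
```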
No. To get an average correlation you have to do an $r$-to-$Z$ transformation (Fisher's $Z$), average the transformed values, and back-transform the average $Z$ to an $r$ again. For the transformation, there are several online calculators.
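That recipe is short enough to sketch directly; the three correlations here are made-up values:

```python
import numpy as np

def average_correlation(rs):
    """Average correlations via Fisher's Z: transform with arctanh,
    average, then back-transform with tanh."""
    return np.tanh(np.mean(np.arctanh(rs)))

r_bar = average_correlation([0.2, 0.5, 0.8])
print(r_bar)   # not the same as the naive arithmetic mean, 0.5
```

Note that the back-transformed average is pulled toward the larger correlations, which is why it differs from the naive mean.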