With several repeated-measures DVs, one can apply either a univariate approach (also called repeated measures sensu stricto, or the split-plot approach) or a multivariate approach (MANOVA). In the univariate approach, the RM levels are treated as deviations from a single variable, their average level. In the multivariate approach, the RM levels are treated as covariates of each other. The univariate approach requires the sphericity assumption while the multivariate approach does not, and for this reason the latter is indeed becoming more popular. However, it spends more df and thus needs a larger sample size. The univariate approach also retains its popularity because it generalizes to mixed models. When the sphericity assumption holds (and all the more so when the stronger compound symmetry assumption holds), the results of the two approaches are, as far as I know, very similar.
so... here is a bit of a dog's breakfast of suggestions
There are more ways to approach this than the options you give yourself. One might be to take your three reward levels, one being neutral, and turn them into two reward effects. So, if C is the neutral reward and A and B are the test levels, make up an A effect (A - C) and a B effect (B - C), and then compare them to each other. Because there are only two levels, sphericity is not an issue, and you're actually comparing your two effects. Do not make the mistake of testing A - C and finding it significant, testing B - C and finding it not, and then concluding there's a difference between A - C and B - C. The difference between those two may not itself be significant.
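A minimal sketch of that contrast in Python (the data here are made up for illustration; the reward names A, B, C follow the example above):

```python
import numpy as np

# Toy data: 20 subjects x 3 conditions (A, B, C = neutral); purely illustrative
rng = np.random.default_rng(0)
A, B, C = rng.normal([1.0, 0.8, 0.5], 0.4, size=(20, 3)).T

a_eff = A - C          # A effect, per subject
b_eff = B - C          # B effect, per subject
d = a_eff - b_eff      # the contrast that actually compares the two effects

# Paired t statistic for (A - C) vs (B - C), computed by hand with numpy
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(t)
```

The point is that the test is run on `d`, the per-subject difference of the two effects, rather than on the two effects separately.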
Mauchly's test, like all such tests, isn't terribly useful. With enough power it will fail all of the time, even if the sphericity violation isn't bad, and with very low power you can pass it all of the time. It definitely shouldn't be used like a hypothesis test in the Neyman-Pearson sense; no test of assumptions should be. Meeting your assumptions is a qualitative decision, and Mauchly's test can help you with that, but it shouldn't be used as a hard decision rule. Along the same lines, always applying GG corrections can reduce the amount of Type I error that you make, as you inquired. However, it can also increase the amount of Type II error.
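If you want to look at the size of a violation directly rather than at a test of it, the Greenhouse-Geisser epsilon can be computed from the sample covariance matrix of the conditions. A sketch using the standard formula (the data below are made up; `gg_epsilon` is just a name I'm using here):

```python
import numpy as np

def gg_epsilon(Y):
    """Greenhouse-Geisser epsilon from an (n subjects x k conditions) matrix.
    epsilon = 1 means perfect sphericity; the lower bound is 1/(k-1)."""
    n, k = Y.shape
    S = np.cov(Y, rowvar=False)             # k x k covariance of conditions
    Cn = np.eye(k) - np.ones((k, k)) / k    # centering matrix
    Sc = Cn @ S @ Cn                        # double-centered covariance
    return np.trace(Sc) ** 2 / ((k - 1) * np.trace(Sc @ Sc))

rng = np.random.default_rng(1)
Y = rng.normal(size=(30, 3))   # i.i.d. data, so sphericity holds approximately
print(gg_epsilon(Y))           # typically near 1 for spherical data
```

An epsilon near 1 means little to correct; the closer it sits to its lower bound 1/(k-1), the more the uncorrected univariate test is in trouble.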
And assuming sphericity isn't invasive at all... Describing it that way makes it sound like you have a little too much reverence for your data and believe there's some church of decisions here. If you want to be conservative in your tests, GG everything and Bonferroni-correct. But if you do that, recognize that you're possibly making Type II errors and note that in your write-up. If you don't want to do any of that, then don't, but make sure that you then draw weaker conclusions from your tests, especially multiple ones, and use them to point the way for future researchers.
If you want to go multivariate, knock yourself out. It helps with the sphericity issue, if there is one, but it doesn't fix multiple-testing issues. And you should pick one approach beforehand and stick with it, not run all kinds of different analyses and see which makes your results look better; that's a whole different level of multiple comparisons. Posting your actual Mauchly test numbers and GG corrections here might get you some expert advice on how large a violation they indicate. It's unlikely they're big, given that you only have 3 levels.
Speaking of 3 levels: there are no GG corrections when you have only two levels, and there is no test of sphericity then either. If you decide to make the three comparisons A-B, A-C, and B-C, then none of these is GG corrected.
A final option you haven't mentioned is to calculate confidence intervals for each of your three comparisons and interpret them directly. You could alpha-adjust them if you wish, or even put two levels of error bars on a graph. Then you simply describe which values of the effect are likely and which are unlikely to be the true ones. So, if the interval for A-C does not cross 0, then 0 is an unlikely value; only the values within the CI are likely.
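That option can be sketched in a few lines of Python; the data and the reward labels are invented, and the Bonferroni adjustment is the optional one mentioned above:

```python
import numpy as np
from scipy import stats

def paired_ci(x, y, alpha=0.05):
    """Confidence interval for the mean paired difference x - y."""
    d = np.asarray(x) - np.asarray(y)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return d.mean() - tcrit * se, d.mean() + tcrit * se

# Toy data: 25 subjects x 3 conditions, purely illustrative
rng = np.random.default_rng(2)
A, B, C = rng.normal([1.0, 0.8, 0.5], 0.4, size=(25, 3)).T

alpha = 0.05 / 3  # optional Bonferroni adjustment for the three comparisons
for name, (x, y) in {"A-B": (A, B), "A-C": (A, C), "B-C": (B, C)}.items():
    lo, hi = paired_ci(x, y, alpha)
    print(name, (lo, hi))
```

Each interval can then be read off the graph or the printout: values inside it are plausible for the true effect, values outside it are not.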
As you can see, the reason you haven't gotten hard answers to your questions is that, despite your having formulated them well, there are only hard answers to small aspects of what you want to know. Things can get firmer when someone has real numbers and hypotheses to deal with. To get more specific help with your description, your multiple-comparison issues, and how to treat your data, make a new post with your analyses included: the numbers and what they mean, what your hypotheses are, and what you hope to find out or discuss. That will be more likely to land you some hard advice that's useful.
Best Answer
Intuition behind sphericity assumption
One of the assumptions of ordinary (non-repeated-measures) ANOVA is equal variance in all groups.
(We can understand it because equal variance, also known as homoscedasticity, is needed for the OLS estimator in linear regression to be BLUE and for the corresponding t-tests to be valid, see Gauss–Markov theorem. And ANOVA can be implemented as linear regression.)
So let's try to reduce the RM-ANOVA case to the non-RM case. For simplicity, I will be dealing with one-factor RM-ANOVA (without any between-subject effects) that has $n$ subjects recorded in $k$ RM conditions.
Each subject can have their own subject-specific offset, or intercept. If we subtract the values in one group from the values in all other groups, we cancel these intercepts and arrive at a situation where we can use non-RM-ANOVA to test whether these $k-1$ group differences are all zero. For this test to be valid, we need to assume equal variances of these $k-1$ differences.
Now we can subtract group #2 from all other groups, again arriving at $k-1$ differences that should also have equal variances. For each of the $k$ groups, the variances of the corresponding $k-1$ differences should be equal. It quickly follows that all $k(k-1)/2$ possible pairwise differences should have equal variances.
Which is precisely the sphericity assumption.
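The argument above can be checked numerically: compute the variance of every pairwise difference and see that they come out roughly equal. A minimal sketch with simulated data (the function name and the data are mine, not from the answer):

```python
import numpy as np
from itertools import combinations

def pairwise_diff_variances(Y):
    """Variances of all k(k-1)/2 pairwise condition differences of an
    (n subjects x k conditions) matrix. Under sphericity these are equal."""
    n, k = Y.shape
    return {(i, j): np.var(Y[:, i] - Y[:, j], ddof=1)
            for i, j in combinations(range(k), 2)}

rng = np.random.default_rng(3)
subj = rng.normal(0, 1, size=(40, 1))          # subject-specific intercepts
Y = subj + rng.normal(0, 0.5, size=(40, 3))    # additive model: sphericity holds
print(pairwise_diff_variances(Y))  # three similar values, near 2 * 0.5**2
```

Subtracting any pair of columns cancels the subject intercepts, so each difference has variance $2\sigma^2$ here, up to sampling noise.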
Why shouldn't group variances be equal themselves?
When we think of RM-ANOVA, we usually think of a simple additive mixed-model-style model of the form $$y_{ij}=\mu+\alpha_i + \beta_j + \epsilon_{ij},$$ where $\alpha_i$ are subject effects, $\beta_j$ are condition effects, and $\epsilon\sim\mathcal N(0,\sigma^2)$.
For this model, group differences will follow $\mathcal N(\beta_{j_1} - \beta_{j_2}, 2\sigma^2)$, i.e. will all have the same variance $2\sigma^2$, so sphericity holds. But each group will follow a mixture of $n$ Gaussians with means at $\mu+\alpha_i+\beta_j$ and variances $\sigma^2$, which is some complicated distribution with variance $V(\vec \alpha, \sigma^2)$ that is constant across groups.
So in this model, indeed, group variances are the same too. Group covariances are also the same, meaning that this model implies compound symmetry. This is a more stringent condition as compared to sphericity. As my intuitive argument above shows, RM-ANOVA can work fine in the more general situation, when the additive model written above does not hold.
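The compound symmetry implied by this additive model is easy to see in a simulation: with random subject effects of variance $\sigma_\alpha^2$ and noise variance $\sigma^2$, the covariance matrix of the conditions should have $\sigma_\alpha^2+\sigma^2$ on the diagonal and $\sigma_\alpha^2$ everywhere else. A sketch (the particular parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, sig_a, sig_e = 5000, 3, 1.0, 0.5
alpha = rng.normal(0, sig_a, size=(n, 1))   # subject effects alpha_i
beta = np.array([0.0, 0.3, 0.6])            # condition effects beta_j
Y = alpha + beta + rng.normal(0, sig_e, size=(n, k))

S = np.cov(Y, rowvar=False)
# Compound symmetry: diagonal ~ sig_a**2 + sig_e**2 = 1.25,
# off-diagonal ~ sig_a**2 = 1.0
print(np.round(S, 2))
```

Equal diagonal plus equal off-diagonal entries is exactly compound symmetry, and it implies the equal difference variances (sphericity) used above.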
Precise mathematical statement
I am going to add here something from the Huynh & Feldt, 1970, Conditions Under Which Mean Square Ratios in Repeated Measurements Designs Have Exact $F$-Distributions.
What happens when sphericity breaks?
When sphericity does not hold, we can probably expect RM-ANOVA to (i) have inflated size (more Type I errors) and (ii) have decreased power (more Type II errors). One could explore this with simulations, but I am not going to do it here.