One point that may help your understanding:
If $x$ is normally distributed and $a$ and $b$ are constants, then $y=\frac{x-a}{b}$ is also normally distributed (but with a possibly different mean and variance).
Since the residuals are just the y values minus the estimated mean (standardized residuals are also divided by an estimate of the standard error), if the y values are normally distributed then so are the residuals, and vice versa. So when we talk about theory or assumptions it does not matter which one we talk about, because each implies the other.
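To see this concretely, here's a tiny simulation (the mean, sd, and constants $a$ and $b$ are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# x ~ Normal(mean=10, sd=2); the parameters are made up for illustration.
x = rng.normal(loc=10.0, scale=2.0, size=100_000)

# y = (x - a)/b is again normal, with mean (10 - a)/b and sd 2/|b|.
a, b = 4.0, 2.0
y = (x - a) / b

print(y.mean())  # close to (10 - 4) / 2 = 3
print(y.std())   # close to 2 / 2 = 1
```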
So for the questions this leads to:
- yes, both, either
- No (however, the individual y-values will come from normals with different means, which can make them look non-normal if grouped together)
- Normality of residuals means normality of groups, however it can be good to examine residuals or y-values by groups in some cases (pooling may obscure non-normality that is obvious in a group) or looking all together in other cases (not enough observations per group to determine, but all together you can tell).
- This depends on what you mean by compare, how big your sample size is, and your feelings on "Approximate". The normality assumption is only required for tests/intervals on the results, you can fit the model and describe the point estimates whether there is normality or not. The Central Limit Theorem says that if the sample size is large enough then the estimates will be approximately normal even if the residuals are not.
- It depends on what question you are trying to answer and how "approximate" you are happy to be.
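Here's a small simulation of the point in the second and third bullets: when group means differ, the pooled y values can look wildly non-normal even though the residuals are a single, perfectly normal sample (the group means and sizes here are invented for the example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Three groups with quite different means but identical normal errors
# (the means and sizes are invented for the example).
means = [0.0, 5.0, 10.0]
groups = [m + rng.normal(scale=1.0, size=200) for m in means]

pooled = np.concatenate(groups)                             # raw y values, lumped together
residuals = np.concatenate([g - g.mean() for g in groups])  # y minus each group's mean

# The pooled y values are trimodal and look strongly non-normal;
# the residuals are just one normal sample.
print(stats.shapiro(pooled).pvalue)     # tiny: pooled y values rejected as non-normal
print(stats.shapiro(residuals).pvalue)  # typically unremarkable
```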
Another point that is important to understand (but is often conflated in learning) is that there are 2 types of residuals here: the theoretical residuals, which are the differences between the observed values and the true theoretical model, and the observed residuals, which are the differences between the observed values and the estimates from the currently fitted model. We assume that the theoretical residuals are iid normal. The observed residuals are not independent, not identically distributed, and not exactly normal (though they do have a mean of 0). However, for practical purposes the observed residuals do estimate the theoretical residuals and are therefore still useful for diagnostics.
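A small numerical sketch of that distinction, using an arbitrary simple-regression design (everything here is illustrative): the observed residuals are $e = (I - H)y$ where $H$ is the hat matrix, so their variances differ across observations, they are correlated, and with an intercept they sum to exactly zero.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small simple-regression design with an intercept; the x values are arbitrary.
n = 10
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])

# Hat matrix H = X (X'X)^{-1} X'; observed residuals are e = (I - H) y.
H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H

# Var(e_i) = sigma^2 (1 - h_ii): the factors differ, so the observed
# residuals are not identically distributed.
leverages = np.diag(H)
print(np.round(1 - leverages, 3))

# Off-diagonal entries of I - H are non-zero: the residuals are correlated.
print(round(M[0, 1], 3))  # -0.291 for this design

# With an intercept in the model, the observed residuals sum to exactly zero.
y = rng.normal(size=n)
e = M @ y
print(e.sum())  # essentially 0
```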
When you have heteroskedasticity, it doesn't make sense to try to check normality of the entire set of residuals, though you could still check groups individually (with corresponding loss of power of course).
On the other hand, it doesn't really make sense to formally test either normality or heteroskedasticity when checking assumptions, since the hypothesis tests answer the wrong question.
This is because your data aren't actually normal (and it's also very unlikely that your populations have identical variance) - so you already know the answer to the question the hypothesis test checks for. With a nice large sample like you have, the chance that a nice powerful test like the Shapiro-Wilk doesn't pick it up is small - so you'll reject as non-normal data from distributions that will have little impact on the significance level or the power. That is, you'll tend to reject normality - even at quite small significance levels - when it really doesn't matter. The test is likely to reject when it matters least (i.e. when you have a big sample).
What you actually want to know is the answer to a different question than the test answers - something like "How much does this affect my inference?". The hypothesis test doesn't address that question - it doesn't consider the impact of the non-normality, only your ability to detect it.
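To illustrate, here's a rough simulation (the gamma shape, sample size and replication count are arbitrary choices): at this sample size the Shapiro-Wilk reliably flags mild skewness, while the actual significance level of a t-test on the same data is barely moved by it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n, reps, alpha = 200, 2000, 0.05
shapiro_rejects = 0
ttest_rejects = 0
for _ in range(reps):
    # Mildly right-skewed data with true mean 0: a centred gamma.
    x = rng.gamma(shape=8.0, size=n) - 8.0
    if stats.shapiro(x).pvalue < alpha:
        shapiro_rejects += 1
    # One-sample t-test of the (true) null that the mean is 0.
    if stats.ttest_1samp(x, 0.0).pvalue < alpha:
        ttest_rejects += 1

print(shapiro_rejects / reps)  # high: the non-normality is reliably detected...
print(ttest_rejects / reps)    # ...yet the t-test's level stays near 0.05
```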
Further, when you have a sequence like "do this equal-variance normal-theory analysis if I don't reject either test, this analysis if I reject one, this other analysis if I reject the other one, and something else if I reject both", you must consider the properties of the whole scheme. Such programs of testing usually do worse (in terms of test bias, accuracy of significance level and, very often, power) than just assuming you'd rejected both tests.
So you recommend to just stick with the ANOVA and consider no assumptions violated?
Not quite. In fact, if anything my last sentence above suggests that you assume heteroskedasticity and normality are violated from the start. Either one alone being violated is relatively easy to deal with, both together is a little trickier (but still possible). However, in your case I think you're probably okay, since I think you needn't worry about one of the two:
Normality may not be such a problem - the considerations would be what kind of non-normality might you have, and how strongly non-normal, and with how large a sample size?
Your sample size seems reasonably large and the distribution pretty mildly left skew and light-tailed, though that assessment may be confounded with the heteroskedasticity. However, if you had a good understanding of the properties of what was being measured - which you may well have - or information from similar studies, you might be able to make an a priori assessment on that basis and so be better able to choose an appropriate procedure (though I'd still suggest diagnostic checking).
Since your data are probability estimates, they'll be bounded. In fact the left skewness may simply be caused by some probability assessments getting relatively close to 100. If that's the case, you must also tend to doubt your assumption of linearity, and that will be a likely cause of heteroskedasticity as well. If my guess about getting close to the upper bound is right you'll tend to see lower spread among the groups with higher mean.
You might consider an analysis suited to continuous proportions, perhaps a beta-regression - at least if you have no data exactly on either boundary. (An alternative might be a transformation, but models that deal with the data you have tend to be both easier to defend and more interpretable.)
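To illustrate why bounded probability-type responses tend to produce exactly that pattern, here's a sketch using beta distributions with a shared precision parameter (the group means and precision are invented): as the mean approaches the upper bound, the spread shrinks and the left skew strengthens.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Beta-distributed "probability estimates" sharing a precision phi, with
# group means approaching the upper bound; all numbers are invented.
phi = 50.0
stds, skews = [], []
for mu in (0.60, 0.80, 0.95):
    a, b = mu * phi, (1 - mu) * phi
    g = rng.beta(a, b, size=5000)
    stds.append(g.std())
    skews.append(stats.skew(g))
    print(f"mean {mu}: sd {g.std():.3f}, skewness {stats.skew(g):.2f}")

# The group nearest the bound has both the smallest spread and the
# strongest left skew - the pattern described above.
```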
With your decent-sized sample, you are probably safe enough on non-normality, but heteroskedasticity might be more of an issue - in particular, heteroskedasticity issues don't decrease with sample size.
On the other hand, if your sample sizes are equal (or at least very nearly equal), heteroskedasticity is of little consequence - your tests will be barely affected.
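A quick simulation of that point, using a two-group pooled t-test for simplicity (the variances, group sizes and replication count are all arbitrary): with equal group sizes the pooled test holds its level even under strong heteroskedasticity, but when the smaller group has the larger variance the level is badly inflated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

reps, alpha = 4000, 0.05

def rejection_rate(n_high, n_low):
    """Type I error rate of the pooled (equal-variance) t-test when the
    true means are equal but the group sds are 3 and 1."""
    rejects = 0
    for _ in range(reps):
        high = rng.normal(scale=3.0, size=n_high)  # high-variance group
        low = rng.normal(scale=1.0, size=n_low)    # low-variance group
        if stats.ttest_ind(high, low, equal_var=True).pvalue < alpha:
            rejects += 1
    return rejects / reps

rate_equal = rejection_rate(30, 30)
rate_unequal = rejection_rate(10, 50)  # the small group has the large variance

print(rate_equal)    # close to the nominal 0.05
print(rate_unequal)  # badly inflated
```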
If equal-sample-sizes are not the case, I suggest you:
i) don't assume heteroskedasticity will simply be okay
ii) don't formally test it, for the same reasons outlined above (testing answers the wrong question)
Instead I suggest you start with the assumption that the variances differ - whether it's to use something like the Welch approach (I can't say I know off the top of my head how that works for a 2x2, but it should be quite possible to make it work in that case, since it only affects the calculation of the residual variance and its df), or to implement your ANOVA as a regression and move to something like heteroskedasticity-consistent standard errors (which are used more widely in areas like econometrics).
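As a sketch of the second option: HC3 standard errors come from the usual sandwich formula and can be computed by hand with numpy, as below. All the cell means, error sds and sample sizes here are invented for illustration; in practice you'd use an existing implementation (e.g. statsmodels' `fit(cov_type='HC3')` in Python, or the `sandwich` package in R), and for two groups scipy's `ttest_ind(..., equal_var=False)` gives the Welch test directly.

```python
import numpy as np

rng = np.random.default_rng(7)

# A 2x2 ANOVA fitted as a regression, with the error sd depending on factor A.
# All cell means, sds and sample sizes are invented for the sketch.
n_per_cell = 40
cells = [(0, 0, 1.0), (0, 1, 1.0), (1, 0, 3.0), (1, 1, 3.0)]  # (A, B, error sd)

rows, y = [], []
for a, b, sd in cells:
    for _ in range(n_per_cell):
        rows.append([1.0, a, b, a * b])           # intercept, A, B, interaction
        y.append(0.5 * a + rng.normal(scale=sd))  # a main effect of A only
X, y = np.array(rows), np.array(y)

# Ordinary least squares fit.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)       # leverages h_ii

# Conventional (homoskedastic) standard errors pool all the cell variances.
sigma2 = e @ e / (len(y) - X.shape[1])
se_ols = np.sqrt(np.diag(sigma2 * XtX_inv))

# HC3 sandwich: weight each squared residual by (1 - h_ii)^-2.
meat = X.T @ (X * (e**2 / (1 - h) ** 2)[:, None])
se_hc3 = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print(np.round(se_ols, 3))  # treats every cell as equally noisy
print(np.round(se_hc3, 3))  # reflects the actual per-cell variances
```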
Best Answer
As Peter said, you probably have little to worry about, as the normality assumption would seem reasonable given your plots. Remember, not only is the normality assumption not too big a deal (studies have indicated that the F-tests are fairly robust to non-normality), but also that the residuals will never be exactly normal anyway. In fact, technically they never can be, not least because the normal distribution has unbounded support (from negative infinity to positive infinity) while real data do not.
Remember, the question is whether the normal distribution would seem like a reasonable model for the residuals, and therefore whether your data conditional on your model (so the cells of your ANOVA) are reasonably normally distributed themselves. The assumption of the ANOVA is that all your groups represent samples from normal populations with the same variance, so that the only differences to be assessed are in the locations of their means. You just need to be happy that your data plausibly reflect this theoretical scenario.