Generally, you should start from the highest order interactions. You are probably aware that it is usually not sensible to interpret a main effect A when that effect is also involved in an interaction A:B. This is because the interaction tells you that the effect of A actually depends on the level of B, rendering any simple main effect interpretation of A impossible.
In the same way, if you have factors A, B, C, then A:B should not be interpreted if A:B:C is significant.
Thus, when you have a 5-way interaction, none of the lower-order interactions can be sensibly interpreted. Therefore, if I understand you correctly and you have interpreted your lower order interactions, you should probably not continue along those lines.
Rather, what you can do is split up your data set and analyze the factor levels separately. Which factor you use to split the dataset is, in principle, arbitrary, but it is often useful to try the split for each variable and assess what you see. In your example, you might start with sex and calculate one ANOVA for males and another for females (each ANOVA containing the 4 remaining factors). Just as well, you could split up the data according to ethnicity (one ANOVA for Asian, one for Caucasian).
You could also split up by one of the within-subject factors.
I will assume that you have decided to split the data by sex (just to continue with the example here).
Then, assume that for males, you get a 4-way interaction. You would then go on to split up the male data by one of the remaining variables (say, ethnicity). You would then calculate ANOVAs for male Asians (over the remaining 3 factors), and for male Caucasians.
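As a rough sketch of this split-up strategy (with made-up data and hypothetical column names `sex`, `ethnicity`, `score`; `scipy.stats.f_oneway` stands in here for the full follow-up ANOVA you would actually run):

```python
import numpy as np
import pandas as pd
from scipy import stats

# Made-up data, just to illustrate the structure of the split.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex": np.repeat(["male", "female"], 40),
    "ethnicity": np.tile(np.repeat(["Asian", "Caucasian"], 20), 2),
    "score": rng.normal(0, 1, 80),
})

# One follow-up ANOVA per level of the splitting factor:
for sex, sub in df.groupby("sex"):
    groups = [g["score"].to_numpy() for _, g in sub.groupby("ethnicity")]
    F, p = stats.f_oneway(*groups)
    print(f"{sex}: F = {F:.2f}, p = {p:.3f}")
```

In a real analysis each subgroup ANOVA would of course still contain all the remaining factors, not just one as in this toy example.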
Importantly, if a follow-up ANOVA yields only a lower-order interaction, then only the factors involved in that interaction should be analyzed further, because the remaining factors did not show significant effects. Thus, if your males ANOVA gives you only a 2-way interaction, then you would average over the other factors and calculate only an ANOVA over the 2 interacting factors (and, because we are in the male part of the ANOVAs, this would be for the males alone).
For the females, everything may look different, and so the decision which follow-up ANOVAs to calculate is separate for this group. So, what you did for males should be done for females in the same way ONLY if you got the same interactions.
Thus, you will potentially have a lot of ANOVAs, and it might not be easy to decide which ones to report. You should report 1 complete line down from the highest interaction to the last effects (possibly t-tests comparing only 1 of your factors at the end). You should not usually report several lines (e.g., one starting the split-up by sex, then another one starting by ethnicity). However, you must report a complete line, and cannot simply choose to report only some of the ANOVAs on that line. So, you report one complete analysis, not more, not less. Which way to go in terms of splitting up / follow-up ANOVAs is a subjective decision (unless you have clear hypotheses you can follow), and might depend on which results can be understood best, etc.
In one-way ANOVA, the tested hypothesis is:

H0: b.Freshman = b.Sophomore = b.Junior = 0
H1: at least one of these coefficients differs from 0

(b standing for the group coefficients)
So basically, your result means that the variance between groups is small and hence cannot be a good explanation of the overall variance in the dataset.
Generally, ANOVA stands for analysis of variance. Unlike regression models, it does not focus on estimating the individual coefficients, but rather gives a simple answer to the question: "Is there any significant difference between the groups?" Or, in other words: "How much of the total variance in the dataset can be explained by dividing the data into the given groups?"
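That variance decomposition can be made concrete with a small numerical sketch (the data are made up, and the group labels just mirror the example above):

```python
import numpy as np

# Three made-up groups of observations:
groups = [np.array([3.1, 2.9, 3.4]),   # e.g. Freshman
          np.array([3.0, 3.2, 3.1]),   # Sophomore
          np.array([2.8, 3.1, 3.0])]   # Junior

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

# Total variability splits into between-group and within-group parts:
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1
df_within = len(all_obs) - len(groups)

# F compares between-group to within-group variability:
F = (ss_between / df_between) / (ss_within / df_within)
```

A small F (relative to the reference F distribution) means the grouping explains little of the total variance, so H0 is not rejected.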
The significance of the main effect for updates seems to support your view that there is a relationship between the number of updates and well-being. Additionally, you didn't find any evidence of an overall difference in happiness between men and women.
A p-value of .008 means that if people with different numbers of updates were equally happy on average, you would expect to observe a sample like yours or a more extreme one (i.e. one in which the apparent differences are even stronger) 0.8% of the time. This is under the conventional threshold of 5%, so you would typically conclude that there is a difference. It's difficult to describe such results simply and it's easy to misinterpret p-values, so if you are not familiar with this you should probably try to read up on it.
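That definition of the p-value can be illustrated by simulation (everything here is invented: the 3-group, 20-per-group design and the "observed" F of 5.2 are stand-ins, not your actual study):

```python
import numpy as np

rng = np.random.default_rng(1)

def f_stat(groups):
    """One-way ANOVA F statistic for a list of sample arrays."""
    all_obs = np.concatenate(groups)
    gm = all_obs.mean()
    ssb = sum(len(g) * (g.mean() - gm) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ssb / (len(groups) - 1)) / (ssw / (len(all_obs) - len(groups)))

# Under H0, every group is equally happy on average, so we draw all
# groups from the same distribution and record the F each time:
null_f = [f_stat([rng.normal(0, 1, 20) for _ in range(3)])
          for _ in range(5000)]

observed_f = 5.2  # hypothetical F from the study, not a real result
p = np.mean([f >= observed_f for f in null_f])
```

Here `p` estimates how often purely random data would show an effect at least as strong as the one observed, which is exactly what the reported p-value quantifies.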
Beyond that, there is also apparently an interaction effect, which is a little bit trickier to interpret. It could mean that the number of updates has a stronger association with happiness for men than for women (or the other way around), that the relationship only holds for one gender but not for the other or even that the relationship goes in opposite directions depending on gender (e.g. men updating their page frequently are happier than men who don't whereas women who update are less happy than women who don't). This result does suggest that it could make sense to retain the variable in the model but did you have a reason to believe gender has an effect in the first place?
One caveat is that the number of updates is presumably not under your control. If you learned statistics from books or courses oriented toward psychology, you will often find that they use causal language to describe significant effects but this is predicated upon the fact that the data come from a randomized experiment. You can run an ANOVA on variables like gender or updating frequency but what you have is in effect a correlation, not per se evidence that updating your Facebook page changes your level of happiness. Statistically, the technique is the same but observational data like yours and experimental data afford different conclusions.
A few other thoughts: