A way to improve upon ANOVA with an ordinal predictor is to use dummy codes in penalized regression. Penalized regression takes advantage of the ordering among the response categories in Likert scale data: it reduces overfitting by smoothing the differences between slope coefficients of dummy variables that correspond to adjacent ranks. See Gertheiss and Tutz (2009) for an overview of penalized regression for ordinal predictors; I've also discussed penalized regression here a few times before.
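To make the smoothing idea concrete, here is a minimal sketch in R. It is not the Gertheiss and Tutz estimator itself (their approach is implemented in the ordPens package); instead it uses split coding, where the k-th dummy flags ratings above k, so each coefficient is the jump between adjacent categories, and ridge regression then shrinks those jumps. All data and names below are made up for illustration.

```r
# Sketch: smoothing adjacent-category jumps with split coding + ridge (glmnet)
library(glmnet)

set.seed(1)
n      <- 200
rating <- sample(1:5, n, replace = TRUE)    # hypothetical 5-point Likert predictor
y      <- 0.5 * pmin(rating, 3) + rnorm(n)  # hypothetical outcome; effect flattens after 3

# Split coding: column k is 1 when the rating exceeds k, so coefficient k is the
# jump from category k to k + 1; penalizing the coefficients smooths those jumps
X <- sapply(1:4, function(k) as.numeric(rating > k))

fit <- cv.glmnet(X, y, alpha = 0)           # alpha = 0 gives the ridge penalty
coef(fit, s = "lambda.min")                 # shrunken adjacent-category jumps
```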
Unfortunately, this approach probably won't do well for categories with very few observations, and I don't know of any method that would. Inferential power is necessarily limited when samples are so unbalanced that groups of interest contain very few observations. Correcting for familywise error would raise the bar even further out of reach, even if that's where it belongs. Whether it belongs there depends on whether you mean to test one big hypothesis several times on separate but related measures, or to evaluate each hypothesis test separately; familywise error adjustment isn't necessary for the latter.
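For what it's worth, if you do decide a familywise correction belongs there, base R's p.adjust covers the standard methods; the p values below are made up for illustration.

```r
# Hypothetical p values from several related tests
p <- c(0.003, 0.02, 0.04, 0.20)

p.adjust(p, method = "holm")         # Holm's step-down adjustment
p.adjust(p, method = "bonferroni")   # more conservative Bonferroni adjustment
```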
If you can't collect more data, you might as well give the test a shot, but give some thought to the degree of evidence you want to see. You probably won't have enough power to distinguish small differences from the null hypothesis with p < .05, so using the Neyman–Pearson framework to dichotomize p values interpretively is probably unrealistic (more so than usual, that is). There are less polarized ways of understanding p values (one might also call them more equivocal ways, but that's probably more appropriate with relatively weak evidence anyway).
The recommendation to focus on effect size estimation and confidence intervals may help here too, because it is in essence a recommendation to focus on what you can learn from your data, even if that's not "enough to reject" a null hypothesis. Plotting your results may also help give you a sense of what's really going on (a quick sketch follows below). Don't feel that confirmatory hypothesis testing is your only option unless you have good reason to restrict yourself to it; you may get some good ideas for hypotheses to test further just by exploring your data, even if you can't conclude anything very firmly from what you have.
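As a sketch of the plotting suggestion (with made-up data standing in for your ratings and your Variable 2), a simple boxplot of the continuous variable by Likert category can reveal trends a significance test might miss:

```r
set.seed(2)
rating <- factor(sample(1:5, 100, replace = TRUE), levels = 1:5, ordered = TRUE)
var2   <- rnorm(100, mean = 0.3 * as.numeric(rating))  # hypothetical Variable 2

boxplot(var2 ~ rating, xlab = "Likert rating", ylab = "Variable 2")
```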
One last option to consider is treating your Likert scale data as continuous. This is a big assumption (one you're effectively already making with regard to your Variable 2), so keep it in mind when interpreting anything you do on that basis...but it would allow you to compute correlations between each item's ratings and your Variable 2. In this case especially, you'd not want to collapse the dis/agree and strongly dis/agree categories. Also bear in mind that a t-test of a correlation assumes bivariate normality, so you might want to consider alternatives for any hypothesis tests on those effect size estimates as well.
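Here's a hedged sketch of that correlation idea, again with made-up data; Spearman's rho sidesteps the bivariate normality assumption and respects the ordinal nature of the uncollapsed ratings:

```r
set.seed(3)
items <- replicate(4, sample(1:5, 100, replace = TRUE))  # four hypothetical Likert items
var2  <- rnorm(100)                                      # hypothetical Variable 2

# Spearman correlation of each item with Variable 2; exact = FALSE avoids
# exact-p warnings from ties, which are unavoidable with Likert ratings
apply(items, 2, function(item)
  cor.test(item, var2, method = "spearman", exact = FALSE)$estimate)
```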
Reference
Gertheiss, J., & Tutz, G. (2009). Penalized regression with ordinal predictors. International Statistical Review, 77(3), 345–365. Retrieved from http://epub.ub.uni-muenchen.de/2100/1/tr015.pdf
Adilah,
Attitudes toward online reading can be assessed with components 1, 3, 4, and 5. Attitude cannot be assessed with component 2, because that component represents self-reported behavior and its response options are about frequency of behaviors, not attitudes.
Before assessing attitudes, I recommend running Cronbach's alpha, a test of internal consistency, on each set of questions representing a component. The outcomes will tell you whether responses to each question are adequately related to the other questions within the same component. If one question does not seem to fit well, consider dropping it from its component. Make sure that you reverse-score negatively phrased items, if there are any, before running Cronbach's alpha. In SPSS, Cronbach's alpha is found under Scale in the Analyze drop-down menu; choose Reliability Analysis.
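If you happen to work in R rather than SPSS, the psych package's alpha function is a commonly used equivalent; the data frame below is hypothetical:

```r
library(psych)

set.seed(4)
# Hypothetical responses: 60 participants, the 8 items of one component, coded 1-5
component3 <- as.data.frame(replicate(8, sample(1:5, 60, replace = TRUE)))

# check.keys = TRUE flags and reverses items that correlate negatively with the
# total, but reverse-scoring known negatively phrased items yourself is safer
alpha(component3, check.keys = TRUE)
```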
Next, aggregate the outcomes within each component. In other words, sum the responses for each question and then divide by the number of questions. For example, if component 3 (anxiety) is composed of items/questions 1 through 8, then for each participant add up all 8 scores and divide by 8. This gives you an overall component score for anxiety. Your anxiety score can then be compared across factors like gender and race; see the sketch below. Note that although the data are ordinal, as you pointed out, once you combine items into an overall score it is appropriate to use parametric statistics in analyses of the overall component scores.
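In R, the aggregation and a follow-up comparison might look like this, continuing the hypothetical component3 data from the sketch above (gender here is made up too):

```r
# Per-participant component score: mean of the 8 anxiety items
anxiety <- rowMeans(component3)

# Compare the composite across a two-level factor such as gender
gender <- factor(sample(c("F", "M"), 60, replace = TRUE))
t.test(anxiety ~ gender)
```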
If you want to determine overall attitude, you would need to repeat the above process combining all items except those in component 2. Determining weak vs. strong influence of components on overall attitude could be tricky, because an attitude in either direction can be equally strong. I suppose that components with overall scores closest to the scale midpoint (between disagree and agree) would qualify as the weakest, because such scores suggest a neutral attitude; one rough way to rank them is sketched below.
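A quick way to operationalize that midpoint idea (the component means below are invented) is the distance of each component's overall score from the scale midpoint, 3 on a 1-to-5 scale:

```r
# Invented overall component means on a 1-5 scale
component_means <- c(comp1 = 3.1, comp3 = 2.2, comp4 = 4.0, comp5 = 3.4)

# Smaller distance from the midpoint = more neutral = weaker influence
sort(abs(component_means - 3))
```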
Best Answer
Use of ANOVA with Likert scales is problematic, as explained here and here.
My recommendation is to use ordered logistic regression or some other model from item response theory. In R you can use the polr function from the MASS package (here is a nice tutorial), or use MCMC samplers such as RStan or rjags for more flexibility.
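A minimal sketch of polr, with simulated data standing in for a Likert response and a continuous predictor (all names hypothetical):

```r
library(MASS)

set.seed(5)
x      <- rnorm(150)             # hypothetical continuous predictor
latent <- 0.8 * x + rlogis(150)  # latent scale assumed to underlie the ratings
rating <- cut(latent, breaks = c(-Inf, -1, 0, 1, Inf),
              labels = c("strongly disagree", "disagree",
                         "agree", "strongly agree"),
              ordered_result = TRUE)   # polr needs an ordered factor response

fit <- polr(rating ~ x, Hess = TRUE)   # Hess = TRUE so summary() can give SEs
summary(fit)                           # slope for x plus the category cutpoints
```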