I'd say your supervisor is correct: the model implies mediation. In general, most SEM and path analysis models involve some mediation or indirect effects. The whole point of these models is to say that, instead of everything being related to everything (as in a correlation matrix), a smaller set of relations, ideally derived from a theoretical model, is enough to explain the covariation among the variables.
Mediation in particular is the kind of model people use to say that X is related to Y because of M. In your case, innovativeness is related to future shopping because ease of use is related to usefulness, which in turn is related to attitude, and so on.
So, if you want to test that model, you would need to specify it in AMOS, or in other software such as R (package lavaan), Mplus, LISREL, EQS, or SAS; I've also heard Stata can handle path analysis in its current version.
If you stick with AMOS, a good guidebook is:
Byrne, B. M. (2009). Structural Equation Modeling With AMOS: Basic Concepts, Applications, and Programming. Routledge Academic.
First, once you have your data, you would fit the model to your observations. Your first 'test' would consist of assessing the degree of fit of the overall model. For this, researchers tend to use a family of fit statistics: chi-square, CFI, TLI, RMSEA, and SRMR.
For common guidelines on interpreting the different fit indices, you can consult:
Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research Online, 8(2), 23–74.
and also:
Kline, R. B. (2010). Principles and Practice of Structural Equation Modeling (3rd ed.). The Guilford Press.
Afterwards, the question is whether you want to estimate your indirect effects. Your model has several indirect effects which can be estimated, for example:
- ease of use is related to future shopping via attitude
- ease of use is related to future shopping via usefulness
- ease of use is related to future shopping via usefulness and then via attitude
- usefulness is related to future shopping via attitude
- the connection between ease of use and future shopping is totally mediated by usefulness and attitude (hence there is no line from ease of use to future shopping in the diagram, which in model terms means a path fixed to zero)
(etc.; there are more indirect effects, from innovativeness to future shopping intention, which I'm not listing).
Mediation analysis then follows additional rules. The most common approach is the delta method for estimating the indirect effect from X (independent) to Y (dependent) via M (mediator). The point estimate is commonly the product of the beta weight from X to M and the beta weight from M to Y. However, how to estimate the standard error of this product is another story: there are bootstrapping methods, for example, and different methods for the case of a binary outcome, and new methods keep appearing. Mediation analysis is quite a hot topic, given that it sits within the whole problem of how to make causal claims, and common practice gets upgraded nearly every year.
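To make the product-of-coefficients idea concrete, here is a minimal simulated sketch in Python/numpy. The variable names (ease_of_use, usefulness, future_shopping) are just stand-ins for the constructs discussed above, and the true effects (a = 0.5, b = 0.4) are invented for illustration; a percentile bootstrap supplies the confidence interval for a*b.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulate a simple X -> M -> Y chain with known coefficients
ease_of_use = rng.normal(size=n)                          # X
usefulness = 0.5 * ease_of_use + rng.normal(size=n)       # M (a = 0.5)
future_shopping = 0.4 * usefulness + rng.normal(size=n)   # Y (b = 0.4)

def indirect_effect(x, m, y):
    """a*b estimate: slope of m ~ x times slope of m in y ~ m + x."""
    a = np.polyfit(x, m, 1)[0]                  # beta weight from X to M
    design = np.column_stack([np.ones_like(x), m, x])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    b = coefs[1]                                # beta weight from M to Y
    return a * b

point = indirect_effect(ease_of_use, usefulness, future_shopping)

# Percentile bootstrap: resample cases, re-estimate a*b each time
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[i] = indirect_effect(ease_of_use[idx], usefulness[idx],
                              future_shopping[idx])
ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The true indirect effect here is 0.5 * 0.4 = 0.2, and the bootstrap interval should exclude zero, which is the usual test of the mediated path.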
As an example of a bootstrap method, you can check:
Shrout, P. E., & Bolger, N. (2002). Mediation in experimental and nonexperimental studies: New procedures and recommendations. Psychological Methods, 7(4), 422.
And this for a more current discussion on mediation analysis:
Hayes, A. F., & Scharkow, M. (2013). The relative trustworthiness of inferential tests of the indirect effect in statistical mediation analysis: Does method really matter? Psychological Science, 24(10), 1918–1927. doi:10.1177/0956797613480187
Finally, to grasp how mediation works, you may want to start with a simpler example:
http://www.ats.ucla.edu/stat/mplus/seminars/introMplus_part2/path.htm
And, specifically for a walk-through example in AMOS, you could check this YouTube video:
http://www.youtube.com/watch?v=9mf7nIAlH5c
Good Luck!
Firstly, your supervisor should explain factor analysis to you. That's why he gets paid the big bucks.
But I guess it's up to old CV to plug the gaps of the educational system.
It would be nice if you could get AMOS with your SPSS system, or possibly use sem or lavaan in R, since I think your research question should probably be addressed through confirmatory factor analysis. What SPSS offers is just an exploratory analysis. So far, that seems to have worked well, since it looks as if the analysis produced the three categories that you believe are operative. Note that Varimax will always produce uncorrelated factors. That's what it does.
So what is factor analysis doing? You have a questionnaire with items, but what really interests you are certain underlying characteristics or "categories" that you can't measure directly. You measure these indirectly through the items of the questionnaire. You want the questionnaire to detect those categories. So perhaps questions 1-3 target the first category; 4-6 target the second.
If this model is correct, then the variance matrix of the 9 items will have a particular structure, reflective of the underlying categories. Confirmatory factor analysis lets you test that hypothesis.
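A toy simulation can show what "a particular structure" means here. In this hypothetical sketch (invented loadings and sample size), two unmeasured categories each drive three items; items sharing a category then correlate much more strongly with each other than with items from the other category, which is exactly the pattern a confirmatory factor analysis tests for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
factor1 = rng.normal(size=n)   # latent "category" 1 (never measured directly)
factor2 = rng.normal(size=n)   # latent "category" 2

# Questions 1-3 target category 1; questions 4-6 target category 2
items = np.column_stack(
    [0.8 * factor1 + 0.6 * rng.normal(size=n) for _ in range(3)]
    + [0.8 * factor2 + 0.6 * rng.normal(size=n) for _ in range(3)]
)

corr = np.corrcoef(items, rowvar=False)
within = corr[0, 1]    # two items from the same category
between = corr[0, 4]   # items from different categories
print(f"within-category r = {within:.2f}, between-category r = {between:.2f}")
```

The correlation matrix of the six items has a block structure (strong within-block, near-zero between-block), and that block structure is the fingerprint of the underlying categories.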
Alternative hypotheses could be that all 9 items reflect only 1 category ... or at the other extreme, that there are no underlying categories that can simplify the variance structure. Confirmatory factor analysis would then check that these categories are relevant to the demographic you have.
Factor loadings are, roughly, the regression coefficients of the items against the underlying factors or categories, if in fact you could measure those factors. What you get from SPSS, I believe, assumes that the factors are scaled to have variance 1.
I'm not sure that high loadings from a category mean that the category is "important" to your demographic. It does suggest that the factor is present and well manifested by the questions. It also implies that people's responses are very much governed by the factor, and less by randomness. It might help if you specified what these categories are.
It is impossible to perform factor analysis on a single variable. Begin by reading the excerpt of the tag wiki for factor-analysis (hover your cursor over this tag). The smaller number of factors in your single variable case is zero. You have no covariances to consider except the item's own variance. It simply doesn't make sense to try factor analysis on one variable.
Your next step should probably be consulting with your advisor. It sounds like he or she has tossed you into the deep end without a life preserver. Fortunately, there's plenty of information already here, and on Wikipedia, and in books, that could've told you what I've told you in the above paragraph. What your advisor actually wants to know is what none of us can tell you. Chances seem good that your advisor had nothing unusual in mind. Hence I again suggest you read Wikipedia, search for an introduction to factor analysis, or read a statistics textbook chapter on it. Any of these will answer the simple question of how to perform a factor analysis. How to do this in SPSS specifically would probably be judged an off-topic question.
Edit: Thanks for adding the path diagram. It looks like you have five constructs in a structural equation model. With only one variable for Behavioral Intention, you probably won't want to include it in a factor analysis unless the objective is to demonstrate that it doesn't load as strongly on a factor as the factor's intended indicators. Factor loadings are correlations between a latent variable and manifest (measured) variables. Ideally, each item that measures a factor will have a loading of $\lambda\ge.7$, meaning the factor explains at least 50% of the item's variance. For instance, you'd want the items that measure Innovativeness to have factor loadings $\ge.5$ (being a little more lenient and realistic) on a single factor, and items that measure other latent constructs (including Behavioral Intention) to load primarily on other factors, or in the case of Behavioral Intention, weakly on all factors (say $\lambda\le.3$ or so).
If you were to factor analyze all your items, you'd hope for a four-factor solution (i.e., large eigenvalues for the first four factors that descend gradually in magnitude, and much lower, roughly equal eigenvalues for all others after the fourth). Parallel analysis or VSS might be the easiest analyses to interpret if you're inexperienced with reading scree plots or choosing the number of factors by other means. Your path diagram indicates expected relationships among these factors, so use an oblique rotation, not an orthogonal one, after extracting factors (again, hopefully four will be the right number). This will give you factor pattern loadings that are worth interpreting as described above: all items intended to measure the same factor should load strongly together on the same factor, and have weak loadings on other factors. Your one item for Behavioral Intention should load weakly on all factors.
If you were to factor analyze the items for one factor at a time, you'd want to see a single-factor solution for each, and all loadings on the general (first) factor greater than $\lambda\ge.5$ or so. You don't need to rotate a single-factor solution. If you were to add the item for Behavioral Intention to the factor analysis of any single factor, you'd just want to see it have a much lower loading, lower communality, or higher uniqueness, as compared to all the other items. This should be an easy enough way to test its discriminant validity; e.g., factor analyze all the items measuring Innovativeness together with the Behavioral Intention item. However, one problem with this method is that performing the comparison multiple times would inflate your Type I / false alarm / $\alpha$ error rate. It might be worth doing anyway, though, just to be liberal in detecting discriminant validity problems.
Since it looks like you have preexisting theoretical measurement models available, you could also (probably should, really) use confirmatory methods like structural equation modeling (SEM). Modification indices could tell you whether an item from one factor correlates too strongly with a factor it's not supposed to measure directly. However, I think Amos is the SPSS-brand SEM software, so you might need access to that to perform this analysis without learning a new software environment. (Like R! I could even give you code in that, but not in Amos...)