It is impossible to perform factor analysis on a single variable. Begin by reading the excerpt of the tag wiki for factor-analysis (hover your cursor over this tag). Factor analysis looks for a smaller number of factors that explain the covariances among a set of variables; with a single variable you have no covariances to consider, only the item's own variance, so the smaller number of factors in your case is zero. It simply doesn't make sense to try factor analysis on one variable.
Your next step should probably be consulting with your advisor. It sounds like he or she has tossed you into the deep end without a life preserver. Fortunately, there's plenty of information already here, and on Wikipedia, and in books, that could've told you what I've told you in the above paragraph. What your advisor actually wants to know is what none of us can tell you. Chances seem good that your advisor had nothing unusual in mind. Hence I again suggest you read Wikipedia, search for an introduction to factor analysis, or read a statistics textbook chapter on it. Any of these will answer the simple question of how to perform a factor analysis. How to do this in SPSS specifically would probably be judged an off-topic question.
Edit: Thanks for adding the path diagram. It looks like you have five constructs in a structural equation model. With only one variable for *Behavioral Intention*, you probably won't want to include it in a factor analysis unless the objective is to demonstrate that it doesn't load as strongly on a factor as the factor's intended indicators. Factor loadings are correlations between a latent variable and manifest (measured) variables. Ideally, each item that measures a factor will have a loading of $\lambda\ge.7$, meaning the factor explains at least 50% of the item's variance. For instance, you'd want the items that measure *Innovativeness* to have factor loadings $\ge.5$ (being a little more lenient and realistic) on a single factor, and items that measure other latent constructs (including *Behavioral Intention*) to load primarily on other factors, or in the case of *Behavioral Intention*, weakly on all factors (say $\lambda\le.3$ or so).
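The link between a loading and explained variance is easy to verify by simulation. Here is a minimal sketch in Python with NumPy (chosen only for illustration, since your software wasn't specified; the sample size and seed are arbitrary): one latent factor and one item loading $\lambda=.7$ on it.

```python
# Illustrative simulation (not from the answer): one latent factor, one item.
import numpy as np

rng = np.random.default_rng(0)
n, lam = 100_000, 0.7

f = rng.standard_normal(n)                   # latent factor, unit variance
e = rng.standard_normal(n)                   # unique (error) part
x = lam * f + np.sqrt(1 - lam**2) * e        # manifest item, unit variance

loading = np.corrcoef(x, f)[0, 1]            # the loading IS this correlation
communality = loading**2                     # share of item variance explained
print(round(loading, 2), round(communality, 2))
```

The recovered loading is close to .7 and the communality close to .49, i.e. the factor explains roughly half the item's variance.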
If you were to factor analyze all your items, you'd hope for a four-factor solution (i.e., large eigenvalues for the first four factors that descend gradually in magnitude, and much lower, roughly equal eigenvalues for all others after the fourth). Parallel analysis or VSS might be the easiest analyses to interpret if you're inexperienced with reading scree plots or choosing the number of factors by other means. Your path diagram indicates expected relationships among these factors, so use an oblique rotation, not an orthogonal one, after extracting factors (again, hopefully four will be the right number). This will give you factor pattern loadings that are worth interpreting as described above: all items intended to measure the same factor should load strongly together on the same factor, and have weak loadings on other factors. Your one item for *Behavioral Intention* should load weakly on all factors.
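If it helps to see the idea concretely, here is a minimal parallel-analysis sketch in Python/NumPy. The data are simulated with a known four-factor structure standing in for your items (loadings, sizes, and seed are all made up); with real data you would substitute your item score matrix for `X`.

```python
# Illustrative parallel analysis: retain factors whose eigenvalues beat chance.
import numpy as np

rng = np.random.default_rng(1)
n, k, items_per, lam = 500, 4, 3, 0.7

F = rng.standard_normal((n, k))                       # 4 uncorrelated factors
X = np.hstack([lam * F[:, [j]]
               + np.sqrt(1 - lam**2) * rng.standard_normal((n, items_per))
               for j in range(k)])                    # 3 items per factor

obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Mean eigenvalues of correlation matrices of pure-noise data of the same shape
rand = np.mean([np.sort(np.linalg.eigvalsh(
           np.corrcoef(rng.standard_normal(X.shape), rowvar=False)))[::-1]
           for _ in range(50)], axis=0)

n_factors = int(np.sum(obs > rand))   # observed eigenvalues exceeding chance
print(n_factors)
```

With this simulated structure, exactly four observed eigenvalues exceed their random-data counterparts, matching the hoped-for four-factor solution.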
If you were to factor analyze the items for just one factor at a time, you'd want to see a single-factor solution for each, with all loadings on the general (first) factor at $\lambda\ge.5$ or so. You don't need to rotate a single-factor solution. If you were to add the *Behavioral Intention* item to the factor analysis of any single factor, you'd want to see it have a much lower loading, lower communality, or higher uniqueness than all the other items. This should be an easy enough way to test its discriminant validity; e.g., factor analyze all the items measuring *Innovativeness* together with the *Behavioral Intention* item. One problem with this method is that performing the comparison multiple times inflates your Type I / false alarm / $\alpha$ error rate. It might be worth doing anyway though, as a deliberately liberal way of screening for discriminant validity problems.
Since it looks like you have preexisting theoretical measurement models available, you could also (probably should, really) use confirmatory methods like structural equation modeling (SEM). Modification indices can tell you whether an item from one factor correlates too strongly with a factor it's not supposed to measure directly. However, I think Amos is SPSS's companion SEM software, so you might need access to that to perform this analysis without learning a new software environment. (Like R! I could even give you code in that, but not in Amos...)
You should not attempt to compute a p-value; simply report the confidence interval. It contains the same relevant information as a p-value (do you have enough evidence to reject the null of no effect?): if the 95% confidence interval excludes 0, the corresponding p-value will necessarily be less than .05. Although people do conduct Wald t-tests by dividing the estimate by the bootstrap standard error to arrive at a t-statistic, this is invalid if the sampling distribution of the statistic is not symmetric, and in testing indirect effects it usually is not. The confidence interval, which need not be symmetric, is a more valid way to test the null hypothesis with bootstrapping.
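To make the logic concrete, here is a minimal percentile-bootstrap sketch in Python/NumPy for an indirect effect $ab$, using simulated data (true paths $a=.4$, $b=.5$, sample size, and seed are arbitrary choices, not anything from the question). The test is simply whether the 95% interval excludes 0.

```python
# Illustrative percentile bootstrap for an indirect effect a*b (x -> m -> y).
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.standard_normal(n)
m = 0.4 * x + rng.standard_normal(n)
y = 0.5 * m + 0.2 * x + rng.standard_normal(n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of m ~ x
    design = np.column_stack([x, m, np.ones(len(x))])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of m in y ~ x + m
    return a * b

boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)                       # resample cases
    boot[i] = indirect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 3), round(hi, 3), lo > 0)  # interval excluding 0 ~ p < .05
```

Here the interval sits around the true indirect effect of $.4\times.5=.2$ and excludes 0, so you would reject the null of no indirect effect at the .05 level, with no t-statistic needed.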
Best Answer
I guess your supervisor is correct and the model does imply mediation. In general, most SEM and path analysis models involve some mediation or indirect effects. The whole point of these models is to say that, instead of everything being related to everything (as in a correlation matrix), some relations are good enough to explain the covariation among the variables: you reduce the many links of a correlation matrix to a smaller set of relations (in the best case) derived from a theoretical model.
Mediation in particular is the kind of model people use to say that X is related to Y because of M. In your case, Innovativeness is related to future shopping because ease of use is related to usefulness, which in turn is related to attitude, and so on.
So, if you want to test that same model, you would need to specify it in AMOS, or in alternative software such as R (package lavaan), Mplus, LISREL, EQS, or SAS; and I've heard Stata can deal with path analysis in its current version.
If you stick with AMOS, a good guide book is:
First, once you have your data, you can fit that model to your observations. For this case, your first 'test' would consist of assessing the degree of fit of the overall model. For this, researchers tend to use a family of fit statistics: chi-square, CFI, TLI, RMSEA, and SRMR.
For common guidelines on what different fit indices mean you can consult:
and also:
Afterwards, the issue is whether you want to estimate your indirect effects. Your model has several indirect effects that can be estimated, for example:
(...and there are more indirect effects from Innovativeness to future intention of shopping which I'm not mentioning).
Mediation analysis then follows additional rules. The most common approach is the delta method for estimating the indirect effect of X (independent) on Y (dependent) via M (mediator): the indirect effect is the product of the beta weight from X to M ($a$) and the beta weight from M to Y ($b$), i.e. $ab$. How to estimate the standard error of this product is another story; the first-order delta method gives $SE_{ab}=\sqrt{b^2\,SE_a^2 + a^2\,SE_b^2}$, and there are also bootstrapping methods, as well as different methods for the case of a binary outcome. Currently there may well be newer methods: mediation analysis is quite a hot topic, given that it sits within the whole problem of how to make causal claims, and common practice gets upgraded nearly every year.
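As a concrete sketch of the delta-method (Sobel) standard error, here is a Python/NumPy version on simulated data; the true paths $a=.4$ and $b=.5$, the sample size, and the seed are arbitrary illustration choices.

```python
# Illustrative Sobel (first-order delta method) test for the indirect effect a*b.
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.standard_normal(n)
m = 0.4 * x + rng.standard_normal(n)
y = 0.5 * m + 0.2 * x + rng.standard_normal(n)

def ols(X, z):
    """Return OLS coefficients and their standard errors for z ~ X."""
    beta, rss = np.linalg.lstsq(X, z, rcond=None)[:2]
    cov = rss[0] / (len(z) - X.shape[1]) * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

ones = np.ones(n)
beta_a, se_vec_a = ols(np.column_stack([ones, x]), m)     # a-path: m ~ x
a, se_a = beta_a[1], se_vec_a[1]
beta_b, se_vec_b = ols(np.column_stack([ones, x, m]), y)  # b-path: y ~ x + m
b, se_b = beta_b[2], se_vec_b[2]

indirect = a * b
se_sobel = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # first-order delta method
z = indirect / se_sobel
print(round(indirect, 3), round(se_sobel, 3), round(z, 2))
```

The estimated indirect effect lands near the true value of $.4\times.5=.2$ with a clearly significant z-statistic; as noted in the other answer, though, a bootstrap confidence interval is generally preferred over this Wald-type test because the sampling distribution of $ab$ is usually asymmetric.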
As an example of a bootstrapped method, you can check this:
And this for a more current discussion on mediation analysis:
Finally, to grasp how mediation works, you may want to start with a simpler example; you can check this one: http://www.ats.ucla.edu/stat/mplus/seminars/introMplus_part2/path.htm
And, specifically for a walk-through example in AMOS, you could check this YouTube video: http://www.youtube.com/watch?v=9mf7nIAlH5c
Good Luck!