Solved – Items from two constructs load on one factor in factor analysis

factor analysis, scales

I am trying to determine the factor structure of a set of 84 items. Exploratory factor analysis with varimax rotation was conducted to estimate the underlying factor structure of the sample data. Two rounds of exploratory factor analysis were run. Items with loadings below the generally accepted 0.60 threshold, and items that cross-loaded highly on more than one factor, were rejected. However, the overall results of the exploratory factor analysis show that items from two different constructs load on the same factor.
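For concreteness, here is a minimal sketch of that screening workflow using scikit-learn on simulated data (the 12-item data set, the 0.40 cross-loading cutoff, and all variable names are illustrative assumptions, not from the question):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated stand-in for the real 84-item data: 300 respondents, 12 items,
# items 0-5 driven by factor 1 and items 6-11 by factor 2.
latent = rng.normal(size=(300, 2))
weights = np.zeros((2, 12))
weights[0, :6] = 0.8
weights[1, 6:] = 0.8
X = latent @ weights + rng.normal(scale=0.5, size=(300, 12))

# EFA with varimax rotation, as in the question.
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(X)
loadings = fa.components_.T  # shape: (n_items, n_factors)

# Screening rule from the question: keep items whose primary loading is
# >= 0.60 and that do not cross-load highly (0.40 is an assumed cutoff).
abs_load = np.abs(loadings)
primary = abs_load.max(axis=1)
secondary = np.sort(abs_load, axis=1)[:, -2]
keep = (primary >= 0.60) & (secondary < 0.40)
print(f"{keep.sum()} of {len(keep)} items retained")
```

With clean simulated data like this, all items survive the screen; the interesting case in the question is what the retained loading matrix looks like afterwards.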

Questions

  • How should I interpret the fact that items for two constructs load on one factor?
  • Is this a problem?
  • If yes, what could I do?

Best Answer

Why are sets of items from different constructs loading on the same factor?

In my experience, many psychological tests have multiple scales where the correlations between some scales can be quite high (e.g., .6 to .8). In such a case, there may not be a huge difference between a model where all these items load on one factor and a model where the items load on different factors. This is further compounded by several other issues: (1) measurement noise means that the sample factor structure is an imperfect representation of the population factor structure, especially with small sample sizes (e.g., < 100 or 200); (2) imposing a varimax rotation forces the factors to be orthogonal, which can mask the fact that the underlying factors are correlated; (3) influences beyond the constructs of interest may be shaping the factor structure (e.g., shared item stems, whether an item is reverse-scored, etc.).
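The first point above can be seen in a small simulation: when two constructs correlate highly, every item loads substantially on a single common factor. This sketch assumes scikit-learn and an illustrative inter-construct correlation of .8 (all names and values here are assumptions for demonstration):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Two latent constructs correlated at .8 (an illustrative value).
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
latent = rng.multivariate_normal([0, 0], cov, size=300)

# Six items per construct, each loading ~.7 on its own construct.
weights = np.zeros((2, 12))
weights[0, :6] = 0.7
weights[1, 6:] = 0.7
X = latent @ weights + rng.normal(scale=0.5, size=(300, 12))

# Fit a one-factor model: because the constructs are so highly
# correlated, all twelve items load substantially on the single factor,
# so a one-factor solution can look quite reasonable.
fa1 = FactorAnalysis(n_components=1)
fa1.fit(X)
one_factor_loadings = np.abs(fa1.components_.ravel())
print(one_factor_loadings.round(2))
```

Printing the loadings shows no item is clearly "off" the single factor, which is essentially the situation described in the question.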

Alternatively, you may just be wrong about the factor structure of your test, and what you thought were separate constructs may be essentially interchangeable at the measurement and item level.

Is this a problem?

The first step is to understand why it is occurring. Is it because the items were written in a way that doesn't sufficiently discriminate between the two constructs? Are the two constructs inherently the same at the measurement level? Are they just highly intercorrelated factors that only separate when you allow for more factors? Are there particular items that might be preventing the factors from splitting?

What can you do?

  1. Read the items carefully and consider whether they reasonably do represent two discrete constructs. Perhaps ask a set of experts to divide the items into the two constructs and see whether they can do so reliably. If they cannot, the items (or the constructs themselves) may be inherently indistinguishable.
  2. Re-run the factor analysis allowing for one or more extra factors and see whether the items then split into two factors. I've seen this occur many times.
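The second suggestion can be sketched as follows: refit the model with increasing numbers of factors and compare fit, looking for the point where adding a factor lets the merged items split. This assumes scikit-learn and simulated data with two moderately correlated constructs (the correlation of .6 and the loop over one to three factors are illustrative choices):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Simulated data: two constructs correlated at .6, six items each.
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
latent = rng.multivariate_normal([0, 0], cov, size=400)
weights = np.zeros((2, 12))
weights[0, :6] = 0.8
weights[1, 6:] = 0.8
X = latent @ weights + rng.normal(scale=0.5, size=(400, 12))

# Fit solutions with increasing numbers of factors and compare the
# average log-likelihood; a clear jump when a factor is added suggests
# the merged factor should split into two.
scores = {}
for k in (1, 2, 3):
    fa = FactorAnalysis(n_components=k, rotation="varimax")
    fa.fit(X)
    scores[k] = fa.score(X)  # mean log-likelihood per observation
    print(k, round(scores[k], 3))
```

Here the jump from one to two factors is large and the gain from a third is marginal, which is the pattern you would hope to see if the two constructs are in fact distinct.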

More generally, use this as an opportunity to learn about the factor structure of the test. There's probably a story to be discerned.
